Human Learning: No, the machines have not entirely kicked us out, and we humans still must make important decisions.

For years, many of us have attended various e-discovery conferences and meetings where we listen to countless hours of presentations and engage in discussions about TAR, computer assisted review, machine learning, and e-discovery's past and future.  Having recently returned from one such discussion, however, I was struck by how little time is actually spent discussing how to better train and develop e-discovery attorneys.  Entire webinars are dedicated to tracking overturns during Relativity Assisted Review to help improve machine learning, yet so little is said about how to better train and develop the actual attorneys making the decisions.

Why is this?  I have a few hypotheses, but most are, disappointingly, colored by my contract attorney friends, who often complain that no one cares about first level reviewers, only their review rates and billable hours.

To this I say, wait, what?!?  I care!  We care!  My boss certainly cares!  And our clients care!  Why on earth would they say that first level reviewer quality doesn't matter?  Even worse, why do they believe it?!?  What could possibly lead them to this conclusion?

Maybe it is because, in the vast majority of cases, document review attorneys tell anecdotal employment stories of being provided a small, dreadful space, a case manual that may or may not include the actual discovery requests, a set of office rules that range from the draconian to the absurd, and absolutely no feedback unless it is about breaking one of said rules.

Put plainly, I believe there is a better way.  Workspace and office rules aside, that machine we spend so much time teaching can also be employed to implement workflows that assist the growth, learning, and development of the entire case team.  Leaving aside the importance of better case outlines, fleshed-out issue code manuals, and shared, regularly updated Q&As, I would like to share an example workflow that we employ to provide every case team member real-time iterative feedback on the coding decisions they make.

Iterative Feedback

What it is:   Employed properly, iterative feedback provides those up and down the review chain the opportunity to identify areas where additional case understanding is possible, and it enhances the PM's ability to quickly recognize areas of concern.  Further, it provides the first level reviewer instant analysis of her/his work in a non-confrontational manner designed to foster better case understanding and more accurate and efficient coding.

How it works:   During the first few days of a review, the first step of the iterative feedback process occurs entirely behind the scenes.  The first level reviewer codes documents using their normal first level template.  As that occurs, our supervisors use a second level template not only to find and correct any errors made by the first level team, but also to track and explain the type of error and the reason for correction.  Thus, this second level template includes all first level fields and choices, but adds two additional fields, 'Overturn' and 'Overturn Attorney Note':
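For illustration only, here is a minimal sketch of how one of those overturn records might look if the coding data were exported out of Relativity for analysis. The field values and reviewer names are hypothetical assumptions, not actual Relativity fields or choices.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OverturnRecord:
    """One row of a hypothetical export of second level QC decisions."""
    doc_id: str                       # control number of the reviewed document
    first_level_reviewer: str         # who made the original call
    original_call: str                # e.g., "Responsive"
    corrected_call: str               # the second level team's corrected call
    overturn: bool                    # the added 'Overturn' field
    overturn_attorney_note: Optional[str] = None  # the added 'Overturn Attorney Note' field

# Example: a second level reviewer corrects a missed privilege call.
record = OverturnRecord(
    doc_id="DOC-000123",
    first_level_reviewer="reviewer_a",
    original_call="Responsive",
    corrected_call="Responsive - Privileged",
    overturn=True,
    overturn_attorney_note="Email seeking legal advice from outside counsel.",
)
```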

The second level team then goes about its regular business of QC'ing, and as they identify documents that were incorrectly coded, they change the calls as necessary but also provide feedback as to the reason for the overturn.

After the second level team has had a chance to review the initial set of the first level reviewer's documents, the first level reviewer is provided a personal saved search, viewable only by him/her, that is automatically populated with the sample of overturned documents identified by the second level team.  The first level reviewer then has an opportunity to agree with the overturn and "clear" the document to the next level, or to flag the document for further discussion with the supervisor and the team.

On a daily basis, the first level team member is asked to review their overturned documents in the QC Clearance layout, which serves as the primary layout during the remainder of the QC process.

Once in the layout, the reviewer either identifies the relevant information and accepts the overturn call, or challenges the second level decision, thereby kicking the document back to the second level team.

Similarly, the second level team has its own saved search populated with challenges from the first level reviewers.  Once in the review platform, the second level reviewer can either accept or reject the challenge, or elevate an unresolved question to the PM for a final call.
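To make the handoffs concrete, the sketch below models the clear/challenge/escalate loop as a simple set of transitions. The role and status names are invented for illustration; they are not Relativity fields or choices.

```python
# Invented status names modeling the feedback loop's handoffs.
TRANSITIONS = {
    ("first_level", "accept_overturn"): "cleared",                # reviewer agrees with the overturn
    ("first_level", "challenge"): "pending_second_level",         # reviewer flags for discussion
    ("second_level", "accept_challenge"): "cleared",              # original call stands
    ("second_level", "reject_challenge"): "pending_first_level",  # overturn stands; back to reviewer
    ("second_level", "escalate"): "pending_pm",                   # unresolved question goes up
    ("pm", "final_call"): "cleared",                              # PM makes the final decision
}

def next_status(role: str, action: str) -> str:
    """Return the document's next QC status given who acted and what they chose."""
    return TRANSITIONS[(role, action)]

# A reviewer challenges an overturn, the second level escalates, and the PM decides:
print(next_status("first_level", "challenge"))   # pending_second_level
print(next_status("second_level", "escalate"))   # pending_pm
print(next_status("pm", "final_call"))           # cleared
```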

From the PM's perspective, this iterative feedback loop serves multiple purposes.  In addition to ensuring that the case team is on the same page, it provides quantifiable feedback on the performance of each attorney beyond simple review rates.  In the above example, the PM can weigh the importance of any inaccuracies (e.g., missing key documents versus merely missing an issue code) to develop her or his own algorithm to assess who should move up the team hierarchy, who might need to be cut when the team is culled, and whom to fight to retain for future projects.  It might also help identify areas where the coding guidance needs clarification.
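One hypothetical way to express such an algorithm is to weight each kind of overturn by its severity and normalize by the volume reviewed. The categories and weights below are assumptions a PM would tune to the case, not values taken from our workflow.

```python
# Hypothetical severity weights; a PM would tune these to the case.
SEVERITY_WEIGHTS = {
    "missed_key_document": 5.0,
    "wrong_responsiveness_call": 3.0,
    "missed_privilege": 4.0,
    "missed_issue_code": 1.0,
}

def reviewer_score(docs_reviewed: int, overturns: dict) -> float:
    """Weighted overturns per 100 documents reviewed (lower is better)."""
    weighted = sum(SEVERITY_WEIGHTS.get(kind, 1.0) * count
                   for kind, count in overturns.items())
    return 100.0 * weighted / max(docs_reviewed, 1)

# Same raw overturn count, very different severity:
print(reviewer_score(500, {"missed_issue_code": 10}))    # 2.0
print(reviewer_score(500, {"missed_key_document": 10}))  # 10.0
```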

Further, when coupled with the pivot function in Relativity 9.3 and later, the PM might add a search to her/his dashboard that both provides graphical evidence of where each person stands and allows immediate access to the actual documents at issue.

*** OT counts are not from an actual case. Our esteemed discovery associates would scoff at counts this high.
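The same view (and the timeliness and second level pivots mentioned below) can also be approximated outside of Relativity from an exported overturn report. This is a sketch only, assuming a hypothetical CSV with one row per overturned document; the file and column names are invented.

```python
import pandas as pd

# Hypothetical export: one row per overturned document.
# Assumed columns: reviewer, overturn_reason, doc_id
overturns = pd.read_csv("overturn_report.csv")

# Overturn counts per first level reviewer, broken out by reason,
# roughly the view the dashboard pivot provides inside Relativity.
pivot = pd.pivot_table(
    overturns,
    index="reviewer",
    columns="overturn_reason",
    values="doc_id",
    aggfunc="count",
    fill_value=0,
)
print(pivot)
```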

Similarly, the PM can build out a pivot to assess which attorneys are more timely and diligent in participating in the iterative feedback loop:

And finally, the PM can build out a pivot to provide instant analysis on the second level team:

One potential criticism of the iterative feedback loop is the added time and expense needed to build out the additional Relativity layouts, as well as all the added communications between the first and second level review teams at the beginning of a review.  However, through experience, we have learned that the additional time spent during this initial learning period is always far outweighed by the case efficiencies of having a well-trained, invested, and vigilant attorney review team.
