Law professor blogs love to debate the law review submission process, and in particular the pros and cons of student-edited journals. The most common complaints about the current system are that placements reflect author and school prestige, and that students simply aren't knowledgeable enough to separate better articles from worse ones. Over at Prawfs, Fabio Arcila comments: “There seems widespread dissatisfaction with [the existing] state of affairs, yet inertia continues to reign.”
Here’s a way to get over the inertia, or at least to get real empirical evidence of how serious the problem might be. I propose a study comparing placements to peer assessments that would work like this:
(1) Pick 10-15 articles in a particular area of law accepted for publication in a wide range of journals in the last year.
(2) Ask 10-15 accomplished scholars in that field to rank the quality of the articles (with author names and journal placements removed).
(3) Compare the explicit scholarly ranking with the prestige (and thus implicit ranking) of the law review placements.
If the scholars' rankings closely track the implicit rankings of the journal placements (as measured by journal prestige), then the complaints about student selections and their overreliance on schools and authors probably don't mean very much. On the other hand, if the journal placements and the scholars' assessments show little or no correlation, then I think the study would give the complaints about the law review placement system a very real boost.
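To make step (3) concrete, here is a minimal sketch of how the comparison might be run, using a Spearman rank correlation between the scholars' consensus ranking and the prestige ranking of the placements. The proposal doesn't specify a method or any data; the language (Python), the use of scipy, and all of the numbers below are illustrative assumptions.

```python
# Minimal sketch of step (3): compare scholars' rankings to journal-prestige
# rankings with a rank correlation. All data below are made up for illustration.
from scipy.stats import spearmanr

# Hypothetical articles, identified only by a blind label.
articles = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]

# Consensus rank assigned by the scholar reviewers (1 = best).
scholar_rank = [3, 1, 7, 2, 9, 5, 10, 4, 8, 6]

# Implicit rank of each article's placement, by journal prestige (1 = most prestigious).
prestige_rank = [4, 2, 8, 1, 10, 6, 9, 3, 7, 5]

rho, p_value = spearmanr(scholar_rank, prestige_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# rho near 1 would suggest placements track scholarly judgment;
# rho near 0 would support the critics of student selection.
```

One caveat on the design: with only 10-15 articles, the estimate of the correlation would be noisy, so the result would be suggestive rather than definitive unless the sample were expanded.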
Oh, and I acknowledge that asking for the evaluations of top scholars has its own serious methodological problems. Those scholars have their own biases, and they may be influenced by author identity even when names are removed (most fields are small enough that an expert can often guess the author). But those biases don't matter much for these limited purposes, because the central complaint about student-edited journals is precisely that their assessments are inferior to those of scholars, which makes scholarly judgment the natural benchmark.