David’s post on the “law porn” sent out by law schools to try to improve their US News rankings highlights a more general problem with the system. A substantial part (25%) of a school’s ranking depends on ratings by randomly selected professors at other schools. Another 15% is based on a survey of randomly selected lawyers and judges.
Here’s the problem: there are some 190 ABA-accredited law schools in the US. The average professor doesn’t know much about what is going on at the vast majority of them. If I spent my time keeping up with the faculty publications, curricula, student quality, and so forth, at the other 180-plus law schools, I wouldn’t have any time left over to do my own research and teaching. Realistically, I only know something about the top 30-40 schools (and even then far from thoroughly), plus a handful of others that I am familiar with for some special reason (e.g. – I went there to do a presentation, and therefore know the faculty). I suspect that the same is true of the lawyers and judges. They too have their own work to do, and therefore can’t spend their time keeping track of the doings at dozens of law schools.
This doesn’t mean that the surveys are completely useless. Some valuable information can still be gleaned from them, especially if the errors of the ignorant US News voters somehow cancel each other out, leaving those knowledgeable about a given school to actually determine its ranking. However, I suspect that errors are not randomly distributed, and that there are some systematic biases. In particular, the voters are less likely to recognize the quality of schools that have recently improved their faculties and/or student bodies (this hurts George Mason, among others), less likely to give high rankings to schools outside major metro areas on the East and West coasts, and so on. I also suspect that the professors – and even more so the lawyers and judges – are likely to base their evaluations in part on what was true when they were in law school rather than regularly updating their evaluations of schools outside the top 20 or 30.
No doubt, there are also other biases that will affect survey responses in an environment where most of those surveyed are necessarily ignorant about the vast majority of the schools that they rate.