Evaluating Student Evaluations

In a recent New York Times essay, the prominent English and law professor Stanley Fish takes aim at student evaluations of professors:

[T]hey measure present satisfaction in relation to a set of expectations that may have little to do with the deep efficacy of learning. Students tend to like everything neatly laid out; they want to know exactly where they are; they don’t welcome the introduction of multiple perspectives, especially when no master perspective reconciles them; they want the answers.

But sometimes (although not always) effective teaching involves the deliberate inducing of confusion, the withholding of clarity, the refusal to provide answers; sometimes a class or an entire semester is spent being taken down various garden paths leading to dead ends that require inquiry to begin all over again, with the same discombobulating result; sometimes your expectations have been systematically disappointed. And sometimes that disappointment, while extremely annoying at the moment, is the sign that you’ve just been the beneficiary of a great course, although you may not realize it for decades.

Ann Althouse seemingly endorses much of Fish’s critique, while Normblog and Glenn “Instapundit” Reynolds express some reservations.

Student evaluations do indeed have plenty of flaws. They are often influenced by educationally irrelevant considerations such as the harshness of the professor’s grading, whether he or she is physically attractive, and whether the student agrees with the professor’s political views on the subject being taught.

That said, I think Fish greatly underrates their potential usefulness. In my experience, student evaluations are extremely helpful in gauging several important elements of teaching: whether the professor’s lectures are clear and well-organized, whether he keeps the course on schedule and covers the material laid out in the syllabus, and whether the professor mistreats students by acting like an obnoxious jerk. On a couple of occasions in my career, I got mediocre evaluations in part because I fell short in one of these areas. Those evaluations made painful reading for me. Indeed, I think that one reason why many academics hate student evaluations is that no one likes to read harsh, anonymous criticisms of his or her performance. The anonymity, of course, tends to increase the harshness. Still, attending to these evaluations helped me correct some mistakes and probably made me a better professor than I would have been otherwise.

I also disagree somewhat with Fish’s claim that good teaching often involves “the deliberate inducing of confusion, the withholding of clarity, the refusal to provide answers.” If Fish merely means to suggest that professors should point out to students that some of the issues covered in the class are disputed, with serious arguments on both sides, I have no quarrel with him. However, making students aware of conflicting perspectives is not the same thing as “inducing . . . confusion” or “withholding clarity.” To the contrary, in such cases a good professor should make sure that the students clearly understand the logic of the opposing views and why they clash with each other. In my experience, if you clearly explain to students that there is a disagreement over a major issue and what the contending views are, they are unlikely to complain about “refusal to provide answers.”

For example, when I teach Constitutional Law I, I try to explain to the students what originalism is, and also the most important of the many alternative perspectives on constitutional theory. By the end of the class, students should realize that there is a serious longstanding debate over the subject. But if I have done my job and the students have done their homework, they should not be confused about what originalism is or what the major arguments against it are.

Fish’s suggestion that students can’t tell how good a professor’s performance was until years after taking the course is also overdrawn. In thinking back on the classes I took as a student, there are very few cases, if any, where my current evaluation of the professor’s performance differs greatly from what I thought at the time. This is true even of classes on subjects in which I have since become an expert but was not one when I took the course. Even if students writing evaluations at the end of a semester can’t yet effectively judge the value of the knowledge the professor imparted to them, they can certainly judge the quality of his presentation of that information.

In sum, student evaluations have flaws. But they also have important benefits. To accentuate those benefits, evaluation forms should ask students specific questions about those aspects of the professor’s performance on which the students are most likely to have good insight: clarity, organization, scheduling, and fair treatment. Students are less likely to be able to accurately evaluate the professor’s expertise in his subject or whether he gave an adequate survey of the relevant issues and perspectives.

In judging professors’ performance as teachers, student evaluations should not be the only tool used (and they rarely are). But neither are they as useless as Fish claims.