Because my student evaluation of teaching (SET) scores are pretty good. Not the best, but OK. And SETs are a terrible way to assess teaching.
These kinds of evaluations are ubiquitous in the US university system, and they kind of drive me crazy: we’re expected to report the details of these numerical scores in our annual reports, I’ve been in meetings where we drone on about the statistics of these things, and of course everyone is expected to get above-average scores on them. Personally, I find them totally useless, have no idea how to turn a 5 into a 6, and basically ignore (except when making my yearly bureaucratic obeisance) the trivial five-question numerical, so-called “quantitative” part of the student evaluations. Far more useful are the short comments students get to make on the form: those actually tell me what parts of the class some students disliked, and what parts they found memorable and useful.
I’m not alone. Others find them useless, for good reasons.
There is one important difference between customer evaluations of commercial and educational service providers. Whereas with commercial providers ratings are unilateral, ratings are mutual in the education system. As well as students evaluating their teachers, instructors evaluate their students – such as by their exam performance. In US studies, these ratings have been found to be positively correlated: students who receive better grades also give more positive evaluations of their instructors. Furthermore, courses whose students earn higher grade point averages also receive more positive average ratings.
Proponents of SETs interpret these correlations as an indication of the validity of these evaluations as a measure of teacher effectiveness: students, they argue, learn more in courses that are taught well – therefore, they receive better grades. But critics argue that SETs assess students’ enjoyment of a course, which does not necessarily reflect the quality of teaching or their acquisition of knowledge. Many students would like to get good grades without having to invest too much time (because that would conflict with their social life or their ability to hold down part-time jobs). Therefore, instructors who require their students to attend classes and do a lot of demanding coursework are at risk of receiving poor ratings. And since poor teaching ratings could have damaging effects at their next salary review, instructors might decide to lower their course requirements and grade leniently. Thus, paradoxically, they become less effective teachers in order to achieve better teaching ratings.
The article goes on to show that by several criteria, what student evaluations actually assess is the easiness of a course, and how little the students are challenged by the material.
There’s more to it than that, of course. My campus has a lot of faculty who have won teaching awards, and we have a reputation for being demanding and for resisting the trend toward grade inflation, and I know many of those award-winners earn their high SET scores by being engaging and enthusiastic and by making students think. Those are important aspects of teaching. But we ought to also be measuring how effectively faculty teach the material, and those little forms don’t do it.
Because student ratings appear to reflect their enjoyment of a course and because teacher strategies that result in knowledge acquisition (such as requiring demanding homework and regular course attendance) decrease students’ course enjoyment, SETs are at best a biased measure of teacher effectiveness. Adopting them as one of the central planks of an exercise purporting to assess teaching excellence and dictating universities’ ability to raise tuition fees seems misguided at best.
Now throw in the fact that SETs are systematically biased against women faculty and that students tend to downgrade minority faculty (they are reflecting cultural biases all too well), and you’ve got a whole grand tower of required make-work that doesn’t do the job and also reinforces trends that we all say we oppose.