I first encountered them at Oberlin 37 years ago last fall. At the end of the semester, a few of my professors passed out forms that they had designed themselves, in which they invited their students to critique their performance. I don't remember what the questions were, but they required thoughtful, paragraph-long answers. The professors handed them out and collected them themselves. That's because they intended to read them immediately and use them, if possible, to improve their teaching. Whether anybody else ever saw them I have no idea.
Fast forward to my first full-time teaching job eleven years later. I was at a large state university on the west coast. Student evaluations were now required in all classes. The form asked a series of multiple-choice questions that were answered by filling in a "bubble" with a no. 2 pencil. They asked things like "The instructor is well prepared for each class session." A. Strongly agree. B. Somewhat agree. C. Agree. D. Moderately agree. E. Faintly agree. F. Neither agree nor disagree. G. Faintly disagree. H. Moderately disagree. I. Somewhat disagree. J. Strongly disagree. K. No opinion. They were handed out and collected by a student volunteer while the professor left the room.
Students had about 10 minutes in which to answer about 30 such questions, and, if they had any time left, they could also respond to a few questions on the back of the form that required written answers. Most didn't bother, having been taxed to the maximum by having to decide whether they "somewhat agreed" or only "faintly agreed" with the statement that the instructor used clearly established criteria to evaluate their work. The results of this exercise in hasty judgment, based on criteria that were neither clear nor established, were tabulated in a computer printout, which went into each instructor's permanent file. These results constituted evidence of "teaching effectiveness." Careers hung in the balance.
The experience I describe is, of course, familiar to anybody who has either attended or taught at an American college or university during the last three decades. Answers to multiple-choice questions on a computerized form are used to determine whether A is a better teacher than B, because A "begins and ends class on time," while B doesn't.
I was therefore deeply gratified to hear someone (I didn't catch the name) point out on NPR the other morning that the universal dependence on these forms has had one rather obvious, and predictable, result. The scores that students give their professors directly correlate with the scores the professors give them. In even simpler language, the better the grades a professor gives, and the less work he or she requires, the better student evaluations that professor will receive.
Given how obvious this is, and how long people have had to think about it, I am a little taken aback by the surprise occasioned by recent studies suggesting that most American college students are not getting much out of their education. After all, if they were challenged to write 20 pages a semester and work three hours outside of class for every hour they spend in class, and if they were given C's on their papers when they were convinced they deserved A's, their professors would be fired.
Now, of course, we are all accustomed to evaluation fatigue. We cannot even call Sears to arrange to have a dishwasher installed without being invited to take a survey about whether we found our conversation with the service representative "excellent," "good," "average," "below average," or "poor." I have spoken to enough people working in service jobs to know that there is only one acceptable answer to such questions. If the 30-second conversation we had with them is rated anything less than "excellent," heads will roll. I just hope I haven't gotten anybody fired by refusing to take the survey. The fact of the matter is, I've already answered that question. Or maybe it's already answered me.