Posts Tagged ‘Student Evaluations’

With a new school comes a new student evaluation form. Having taught in various capacities at four institutions, I have now experienced evaluations on four different forms, the most recent of which I got back after the fall semester. As I’ve said in the past, I don’t know if these forms measure what some administrators think they measure, but they do provide some insight into student satisfaction with our courses. Like my previous institutional transition, where the evaluations went from questioning the quality of class discussions to questioning whether I tried to have students discuss things, this new evaluation form demonstrates some of the things that an institutional committee agreed might be important while also showing how flawed this system is.

At my current institution, the evaluations measure things like students’ rating of me, the course, my grading, my assignments, and my course materials, indicating that all of these things are important. (My all-time favorite evaluation question was how close my course came to a student’s perceived “ideal college course” – talk about a high bar!) These items are measured on a five-point scale ranging from “poor” to “excellent.” The problem is that the scale is unbalanced, meaning that “poor” is really the only negative option and the other four are varying degrees of good. I suppose that this might allow administrators who look at my evaluations to see how positively students viewed my courses, but it also means that a rating like “acceptable” that falls in the middle of the scale looks like “neutral-to-bad” to administrators.

As I expected, my evaluations took a hit upon changing institutions. This is the aspect of the experience that led me to realize the ways that we reify student evaluations. By the last few semesters at my previous institution, evaluations for one difficult course were almost universally positive. The evaluations at my new institution for largely the same course were not nearly as positive. Why? Because I had responded to years of student feedback on a few particular areas of the course at my previous institution and then the instrument used to measure that feedback changed. Now I will begin the process over again, responding to feedback in new areas that will help me hone my course into one that students don’t have as much to complain about on course evaluations. This doesn’t mean that my new course will be “better,” just that it will better reflect the areas that my new institution deems worthy of student evaluation.

The thing is, if I hadn’t changed institutions again I might have forgotten the degree to which I’ve been effectively teaching to the evaluations over the past five years and simply accepted that I had become a master teacher. Even recognizing this, there isn’t much that I can do about it since these are the measures that will contribute to determining my future. As a new faculty member, it is comforting to think that lower evaluations are not only about me. The trick is to remember this fact as the evaluations rise over time.

“Like” Memoirs of a SLACer on Facebook to receive updates and links about teaching to tests, evaluations, and maybe even students via your news feed.

 


After every semester, when I’ve recovered from grading, I get to experience the joy of getting my student evaluations back. (See previous posts on evaluations here and here.) Other than students who complain about having to write papers or complete readings or take exams, one thing that has always bothered me about evaluations is the fact that one or two students will inevitably give me a less-than-perfect score on something like “arrives to class on time” or “returns graded assignments within a cycle of the moon.” This bothers me because I always arrive to class on time and return graded assignments within a cycle of the moon, so any student who thinks I do not is either lying or not paying attention.

This semester I had students in one of my courses complete a group project and, when the project was complete, evaluate their group members, which gave me some insight into these frustrating experiences. When looking over a few students’ evaluations of their group members, I noticed that they assigned their group members a 4/5 on measures like “completed his or her share of the work” and “contributed ideas to the group.” The interesting thing was not the 4/5 itself (some students rated their group members much lower than this – after all, nobody wants to deal with slacking group members!) but that some students assigned a 4/5 in these categories to both their group members and themselves. Apparently, nobody in these groups completed their full share of the work or contributed ideas to the group. I should have realized this based on other situations, but I guess that it is true for evaluations as well – some students will never be satisfied, even with themselves.


Last semester was possibly my most frustrating as an instructor, given that two of my courses had lower-than-normal levels of class participation. Having finally received my student evaluations from the fall, it appears that my frustration was felt by at least a few of my students. Numerically, my evaluations were similar to those of other semesters. Qualitatively, though, it appears that more students who would normally have left the comments section blank were compelled to complain. Here are some of my favorite quotes:

“Very negative attitude towards teaching. Often made rude comments to students for no reason… Terrible class, terrible professor.”

“Dr. Smith tends to be rude and misunderstanding towards his students. It would be appreciated that he shows his students the respect he demands as a professor. He doesn’t relate well to college life and all that it entails.”

“he is a good teacher but he is kind of mean sometimes & comes off indifferent to helping.”

“When talking to students in class or when commenting on a student’s answer to a question, it would be nice not to receive a smartass answer/comment in response.”

“Snide comments were made to multiple students and I was offended by his ego. He acts as though he is better than us simply because he has a PhD. My suggestion would be to tone down the sarcasm.”

Looking only at the comments above, I would seem to be a terrible professor. I understand that not all students appreciate sarcasm, and that my responses were likely harsher last semester than in most. Thankfully, there were also a few students who seemed to enjoy my courses. When compiling evaluations for review by others, I always follow a negative evaluation with a positive one that contradicts it. Toward this end:

“You were a great professor. You were able to relate to us but keep respect.”

“Dr. Smith needs to be less enthusiastic with his teaching and try to be more boring and even more unpredictable with grading and pop-quizzes. His energy level is far too high for someone like me and it amazes me how someone like that can become a professor (just kidding, Dr. Smith is awesome).”

“Great professor. Very knowledgeable and always willing to help.”

Thankfully for both my students and me, this semester has been much less frustrating than last.


Given my previous statement that instructors are letting students off the hook for their failure to complete assigned readings, I have tried to hold students to higher standards when grading, especially on writing assignments.  This includes requiring students to have things like thesis statements that they support with relevant examples.  In one course, I required students to write brief summaries of some topic that had stood out to them during the previous section of the course, asking them to combine the information in their readings to look at something from a different perspective.  These papers were okay at best.

Although there was some improvement as the semester went on, students seemed nearly incapable of writing an original thesis statement and supporting that statement with data.  While I am not sure why this is the case, I was interested in one particular comment on a student’s course evaluation:  “Dr. Smith asked us to write summary papers after each unit.  When he graded the first papers, he graded them as persuasive essays, expecting an argument and support in the papers.  This made it difficult to write the papers.”  Based on this sentence, I’m not sure what exactly made the papers difficult to write (the combination of summary and argument? conflicting instructions and grading?), but I was struck by the use of the term “persuasive essays.”  To me, all essays should be persuasive.  This student, however, considers persuasive essays to be a particular type of writing that is separate from most writing.  In future classes I’m going to explore this language further to see if I can help students bridge the gap between persuasive essays and essays.


In some ways, teaching evaluations are the most important reflection of my performance over the course of the past semester.  While the reports of peer evaluators will appear in my tenure file, a one-class observation may not hold as much weight as these 15-minute student responses to a semester’s worth of work.  Setting aside the hotly contested issue of whether student evaluations tell us anything at all about one’s teaching abilities, the fact that we are required to give them and that others are required to look at them leads to the question, as Female Science Professor points out, of when.

To some extent, this is dictated by institutional guidelines.  In grad school I typically made evaluations the last task of the last class period.  Influenced by a professor I had been a graduate assistant for, I also tried to give a brief talk highlighting the progress students had made over the course of the semester that was intended both to wrap up the semester and leave students with positive thoughts about the course before they evaluated it.  Maybe because of this practice, I have always been in favor of end-of-the-semester evaluation administration.

Last year, however, the deadline for evaluations at my new school was a week before the end of the semester, forcing me to rethink my timing.  Without the last day as an option, I had to consider the issues involved.  For example, in order to ensure that all students would be present for the evaluations, it made the most sense to give them on a day that an assignment was due.  Of course, this raises the question of whether students will think more negatively about a class after staying up late to complete an assignment.  It may actually be better to give the evaluations on a day that some students miss class, since the students who skip a class close to the end of the semester may not be the best students and, hence, may not give the most positive evaluations.  How would it look, though, if one fifth of a class (5 out of 25) did not take the evaluations?

Because I want to make sure that the students who have been most engaged over the course of the semester complete evaluations, I find myself giving evaluations on days that an assignment is due.  Despite the work they have just put in, my hope is that this is better than giving evaluations before an assignment is due, when students are still feeling the stress of a looming deadline.  In the end, though, I’m not convinced that anything I do will actually make a difference in a given student’s evaluation of my course.  There is a considerable amount of research on student evaluations, but unless my scores decrease dramatically I believe that it is better to spend my time preparing good courses than trying to game the system.


At some schools, the biggest transition for new faculty is probably related to learning the ins and outs of departmental politics.  Luckily, my own department does not have much in the way of politics.  I have, however, noted some interesting campus politics.  During a recent conversation about student evaluations I found myself with several faculty members from the humanities who appear to have an inherent distrust of the process.

Obviously, lots of people dislike evaluations, but I’ve never talked to anybody who distrusts them like these professors from the humanities.  The fact is, I’ve always approached student evaluations from the standpoint of the social sciences.  As such, evaluations are one way of collecting data about the ever-elusive student satisfaction.  As a sociologist, I’ve never questioned whether surveys were a valid method of data collection.  While survey methods are not perfect, they do reflect something about students’ reactions to what we do in the classroom, even if that something is not what we intended to measure.  This allows us to compare the reactions of our most recent students to those in the past using a standardized set of questions.

In contrast to the attitudes toward surveys that I developed in years of sociology courses, my colleagues in the humanities likely spent their graduate school days wrestling with debates about what constitutes a text.  For them, bubble sheets and numeric printouts are a mysterious entity that others (such as the members of the administration who have backgrounds in the social sciences) can manipulate to suit their needs.  While I strongly believe that this distrust is misplaced, this glimpse into campus politics was eye opening.


When I received my course evaluations for my first semester as a real professor, my previous experiences with the differences between my current and former students caused some concern.  Due to the number of things I had to do near the end of the fall semester, I had never even looked closely at the evaluation form until the registrar returned the completed forms to me.

Looking at the evaluations, I was struck by two things: 1) my teaching looked good numerically; and 2) these numbers told me next to nothing about the way students perceived my courses.  The item related to class discussions provides a good example.  I have always considered class discussions to be one of the weaker areas of my teaching, no matter how many teaching seminars on the topic I attended (maybe my students didn’t discuss things because they weren’t doing the reading).  Items asking students about the quality of class discussions reflected this (in the subtle way that a difference of .03 on a five-point scale can reflect anything).  Looking over my newly opened evaluations, however, I was struck by the fact that the only question about class discussions was related to whether I encouraged them.  I did well on this item, having spent several minutes of each class prodding students to discuss things as a class.  There was no corresponding item, however, about whether my attempts at promoting class discussion were successful.  Any student assessment of the quality of class discussions would have to be offered spontaneously on the qualitative portion of the evaluations.

As a result, what I feel was the weakest portion of my courses received an apparently strong quantitative evaluation and a nearly-nonexistent qualitative evaluation.  While I was nervous before opening my evaluations, my feelings afterward were closer to apathy.  Nearly every semester I need to remind students that, no, merely showing up does not count as class participation.  Based on the current evaluation form, though, it seems that professors at my school are being held to this sort of “A for effort” standard.
