The College of Charleston has recently moved to a paperless, online-only course evaluation system. The obvious benefit of the new system is that instructors no longer have to give up class time for student evaluations, and students no longer have to shuffle sealed envelopes from one building to another once the evaluations are complete. I’m a big proponent of technology-enhanced learning, and while I appreciate the time (and environmental) savings of the new system, I find myself frustrated with it. One problem is that the response rate is very low every semester. Any of our Math 104 (“Elementary Statistics”) students can tell you about the issues with a voluntary response sample.
But the low response rate isn’t my main problem with the evaluations. In an ideal world, the course evaluations would provide statistically meaningful data to help me guide course design, structure, and content. Unfortunately, they don’t. For example, one question asks students to rate (on a Likert scale) the statement, “The instructor showed enthusiasm for teaching the subject.” Yes, I am enthusiastic in my classroom (both about teaching and about mathematics), and I am happy that my students notice and enjoy that enthusiasm. But knowing this doesn’t help me teach the course better. I would prefer student feedback on statements like, “In this course I learned to work cooperatively with my peers to learn mathematical concepts.”
Overall, my issue with the evaluations is that the questions posed are teacher-centered instead of learner-centered. For example: rate the statement “Overall this instructor is an effective teacher.” This framing removes the student’s responsibility for their own learning. Compare it with: rate the statement “Overall in this course I developed skills as an effective learner.” My biggest goal in a mathematics course is to give students problem-solving skills they can use beyond my classroom. If a professor gives fantastic lectures, that’s great; but a fantastic lecture may not be what helps students five years from now. Instead, I hope to give students skills, practice, and experience in critical thinking, problem solving, and complex reasoning. Rating whether they’ve developed these skills matters more than rating “Overall, the required textbook was useful.”
Of course, figuring out how students have grown academically or intellectually is difficult. In this semester’s Precalculus classes, I’m working with another instructor on designing course content. One thing we decided to do was use something similar to the Student Assessment of their Learning Gains (SALG) tool to gather data on student progress through the course. Students first take a benchmark SALG survey, and they will repeat a similar survey two or three times over the semester. We hope to gather meaningful data on the growth of their skills by tracking things like whether they are in the habit of “using systematic reasoning in the approach to problems” or “using a critical approach to analyze arguments in daily life.” Hopefully this data will prove useful as we continue to tweak the course.
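For the curious, here is a minimal sketch (in Python, with made-up numbers) of the kind of pre/post comparison we have in mind. The question labels, the 1–5 Likert scores, and the student responses below are all hypothetical, and the real SALG tool does its own reporting; this just illustrates the arithmetic behind a “learning gain.”

```python
# A toy pre/post comparison of Likert-scale survey responses.
# All questions and scores below are invented for illustration only.

from statistics import mean

# Hypothetical responses: question -> one 1-5 Likert score per student,
# for the benchmark survey and for a follow-up survey later in the term.
benchmark = {
    "using systematic reasoning in the approach to problems": [2, 3, 2, 3, 2],
    "using a critical approach to analyze arguments in daily life": [3, 2, 3, 3, 2],
}
followup = {
    "using systematic reasoning in the approach to problems": [3, 4, 3, 4, 3],
    "using a critical approach to analyze arguments in daily life": [3, 3, 4, 3, 3],
}

# For each question, report the change in the class's mean rating.
for question, pre_scores in benchmark.items():
    post_scores = followup[question]
    gain = mean(post_scores) - mean(pre_scores)
    print(f"{question}: mean gain {gain:+.2f}")
```

A positive mean gain on a question suggests the class, on average, reports more of that habit than it did at the benchmark, though with voluntary, self-reported data the usual caveats about sampling apply.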