Student Evaluations: How Valid Are They?
Preface: This blog post was inspired by a forum post by my PIPD 3240 classmate, Eleanor Simpson: http://moodle.vcc.ca/mod/forum/discuss.php?d=112283#p362853
When I look back on some of the courses I took recently, I find myself asking 'what did I learn?' and 'who did I learn it from?' In one course in particular, I found I had to teach myself the material after the course was over, because a large portion of our class time was spent going back and forth about the assignments. If I did an evaluation now, as opposed to while the course was ending, my answers would be quite different.
Reflection is an essential aspect of self-evaluation: time usually grants us perspective and clarity in all matters.
Upon reflection, the lesson I learned from the confusion in that course is that, as an instructor, it is easy to lose direction when you let others 'steer the bus'. In the next course, I saw confusion coming and stepped in to help keep the bus on track (it must be all those years of rugby and teamwork). Previously, I would have been hesitant to step in; however, after the experience of having my learning compromised, I felt the need to be assertive (as Dev taught me) and keep things on track.
After doing some research on the topic, I found that the issue is still in a state of flux, with no clearly superior evaluation model that addresses all of the concerns that have been raised:
The first article is a Q&A by University Affairs with University of Toronto educational developer Pamela Gravestock. Here are two exchanges I found interesting:
UA: Should students be able to see course evaluations?
Dr. Gravestock: My opinion is that they should be available to students; it closes that loop. Students can see that the feedback they’ve provided is being used.
UA: I have heard some frustrated professors say that course evaluations don’t give much direction on what they need to do to improve. What would you say to that?
Dr. Gravestock: I would relate that back in part to the instrument itself. Often the questions are not the right questions. General questions about the instructor’s effectiveness aren’t going to tell you what’s going on. Also, faculty are often just given this information and no one guides them through it. Educational developers are really well-positioned to help instructors in interpreting the data and figuring out next steps – a plug for my profession!
Here is a link to Pamela Gravestock and Emily Gregor-Greenleaf's report on student evaluations:
Section 4B of Gravestock’s report discusses the validity and
reliability of students as evaluators and concludes that:
1. "Students are reliable and effective at evaluating
teaching behaviours (for example, presentation, clarity, organization and
active learning techniques), the amount they have learned, the ease or
difficulty of their learning experience in the course, the workload in the
course and the validity and value of the assessment used in the course"
(Gravestock, 2008)
2. However, "students may not be qualified to assess the
level, amount and accuracy of course content and an instructor’s knowledge of,
or competency in, his or her discipline. [Moreover] "Such factors cannot
be accurately assessed by students due to their limited experience and
knowledge of a particular discipline. It has also been suggested that students
are unable to evaluate instructor grading practices and methods of delivery,
appropriateness of selected readings and whether instructors present any bias
in their delivery of course" (Gravestock, 2008)
Finally, in an article titled "Do student evaluations measure teaching effectiveness?", Philip Stark, a statistics professor at the University of California, Berkeley, suggests that student evaluations are not as effective as administrators believe they are.
Stark's key points are:
● How good are the statistics? Teaching evaluation data are typically spotty, and the techniques used to summarize evaluations and compare instructors or courses are generally statistically inappropriate (the short example after this list illustrates the kind of pitfall he means).
● What do the data measure? While students are in a good position to evaluate some aspects of teaching, there is compelling empirical evidence that student evaluations are only tenuously connected to overall teaching effectiveness. Responses to general questions, such as overall effectiveness, are particularly influenced by factors unrelated to learning outcomes, such as the gender, ethnicity, and attractiveness of the instructor.
● What’s better? Other ways of evaluating teaching can be combined with student teaching evaluations to produce a more reliable, meaningful, and useful composite; such methods were used in a pilot in the Department of Statistics in spring 2013 and are now department policy.
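To make Stark's first point a little more concrete, here is a minimal, purely illustrative Python sketch. It is not taken from his article; the instructors, ratings and enrolment numbers are invented. It simply shows how comparing raw averages of sparse, ordinal ratings can flatter one instructor over another:

def summarize(name, ratings, enrolled):
    # Average the 1-7 ratings and report how many students actually responded.
    avg = sum(ratings) / len(ratings)
    rate = len(ratings) / enrolled
    print(f"{name}: mean = {avg:.2f}, response rate = {rate:.0%}, n = {len(ratings)}")

# Instructor A: only the five happiest students (of 40 enrolled) filled in the form.
instructor_a = [7, 7, 7, 6, 7]
# Instructor B: broad, honest participation (16 of 20 enrolled) with a mix of scores.
instructor_b = [6, 5, 6, 7, 5, 6, 4, 6, 5, 6, 7, 5, 6, 6, 5, 6]

summarize("Instructor A", instructor_a, enrolled=40)
summarize("Instructor B", instructor_b, enrolled=20)

# A naive comparison of the two means favours Instructor A, even though most of
# A's class never responded and the scale is ordinal, so a gap of "a point or so"
# has no fixed meaning in the first place.

The point is not that averages are always wrong, only that low response rates and an ordinal rating scale make this kind of ranking much shakier than it looks.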
Formal Citation of Report
Gravestock, P., & Gregor-Greenleaf, E. (2008). Student Course Evaluations: Research, Models and Trends. Toronto: University of Toronto.