
Why Student Evaluation is Broken: Using Cookies and Science


As we near the end of the semester, faculties are most likely looking to survey their students through Student Evaluation Surveys (SES, or sometimes Course Evaluation Surveys, CES). This, interestingly for our times, is now done online. Surveying our cohorts is completed in ritualistic fashion, and the results are often used as a proxy for teaching and learning quality, as valuable data on student opinion and perspective, and as an evaluation of the online and offline offering.

Many educators have been sceptical of this surveying approach due to issues of validity and reliability: what attitudes are these surveys looking to capture? How do they get a representative sample of students? Do they just capture polarised opinions? And, most importantly, how is this data being used to evaluate teaching performance? This matters most acutely for sessional or adjunct staff, who are often wedded to the results of these surveys for continuing employment, and for whom they serve as a vague metric of performance.

Well, we’re here to remind you of two pieces of important science that dispel the myth that these surveys are relevant and robust!

A 2019 UK study of Russell Group institutions showed that sample size matters (Holland, 2019): any SES with fewer than 20 respondents should be treated with caution (a fixed threshold, as opposed to a “proportion of the class” measure of reliability). Larger classes are biased towards lower scores on average, as larger cohorts are less likely to agree, but they do provide richer insight simply by virtue of their size.
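As a rough sketch of how that threshold could be applied to an SES export (the column names and data below are invented for illustration):

```python
import pandas as pd

# Hypothetical SES export: one row per module offering.
ses = pd.DataFrame({
    "module": ["HIST101", "CHEM240", "MGMT305"],
    "respondents": [12, 54, 19],
    "mean_score": [4.6, 3.8, 4.1],
})

# Following Holland (2019): flag any module with fewer than 20
# respondents, regardless of what proportion of the class those
# respondents represent.
MIN_RESPONDENTS = 20
ses["reliable"] = ses["respondents"] >= MIN_RESPONDENTS
print(ses)
```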

It is also important not to dismiss patterns when summarising SES results, as this study suggests that the variability of responses is what matters. It is best to consider individual cohort differences, such as whether the module is an elective, students’ prior discipline of study, and gender (of both learner and teacher). If these individual qualifiers are captured and can be filtered on, they substantially change the insights we can glean from our SES datasets.
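Again purely as a sketch, disaggregating a hypothetical response-level dataset by one of these qualifiers might look like this; a single cohort-wide mean would hide the differences it surfaces:

```python
import pandas as pd

# Hypothetical response-level SES data with one cohort qualifier.
responses = pd.DataFrame({
    "module": ["MGMT305"] * 6,
    "score": [5, 4, 2, 5, 3, 2],
    "elective": [True, True, False, True, False, False],
})

# Split the summary by whether students chose the module: the mean,
# spread, and sample size can differ sharply between the two groups.
summary = responses.groupby("elective")["score"].agg(["mean", "std", "count"])
print(summary)
```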

The second piece of science is more widespread than we realise: attempting to influence students’ impressions of their subject and teaching quality through gifts (Hessler et al., 2018). Namely: cookies! As expected, the cohort given cookies while completing their SES rated the course much more positively than a control group, and rated the teaching higher too! Anecdotally, this may be no surprise to the academics inclined to bring sweets to their final class, along with a reminder for students to complete their feedback immediately.

In psychological terms, we call this recency bias: students form a more favourable perception or memory of the class because of a more enjoyable recent experience. It’s similar to the effect of a delicious final dessert in a degustation menu, ensuring the diner goes home happy and raving to friends and family.

Of course, the biggest issue with these surveys is that the outcomes and changes resulting from such feedback only impact future cohorts. Many students feel disenfranchised by how late in the term they are surveyed.

Educators everywhere should remind students of the value of the feedback they provide in improving learning and teaching, but should also make the effort, at the beginning of semester, to explain how the course has evolved from past versions. We have a duty to show dynamism and responsiveness to our learners: as teachers, to improve our practice transparently; as researchers, to show that this improvement is informed by a reliable evidence base.

Works cited 

Hessler, M., Pöpping, D. M., Hollstein, H., Ohlenburg, H., Arnemann, P. H., Massoth, C., … & Wenk, M. (2018). Availability of cookies during an academic course session affects evaluation of teaching. Medical Education, 52(10), 1064-1072.

Holland, E. P. (2019). Making sense of module feedback: accounting for individual behaviours in student evaluations of teaching. Assessment & Evaluation in Higher Education, 44(6), 961-972.
