Proceedings of The Physiological Society
University College Dublin (2009) Proc Physiol Soc 15, PC69
Validity of student peer assessment
A. J. Al-Modhefer1, L. E. Montgomery1
1. Centre for Biomedical Sciences Education, Queen's University Belfast, Belfast, United Kingdom.
The assessment of students by their peers has been posited as a useful means of class evaluation, giving students an insight into the marking process in addition to the mark received. It is often appropriate for assessing group work in oral and poster presentations, and is particularly valuable when both product and process are assessed (Race et al., 2005). Van den Berg et al. (2006) point out that peer assessment also allows students to work with colleagues in a way that they will during their professional careers. Wheater et al. (2005) demonstrated that results from peer assessment are reproducible and comparable with the equivalent staff results, helping to allay fears that students will 'over-mark' or attribute marks based on their personal feelings towards other students. The aim of this study was to compare the marks given by staff and students for the same pieces of work at this institution to see how similar they were. The pieces of work were poster presentations in a second year Physiology/Pharmacology module and oral presentations given as part of a third year Neuroscience module. Results are expressed as mean ± standard error, and an unpaired t-test was used to compare data; results were accepted as statistically significant at the 95% level. Staff marks were significantly higher than those given by students for both Neuroscience oral presentations (n=50, 73.6±0.7 vs 68.9±0.4) and Physiology/Pharmacology poster presentations (n=32, 70.0±0.6 vs 67.4±0.6). The results clearly indicate that advanced students tend to 'under-mark' each other's work in comparison with academic staff; fears that students may be excessively lenient have proved unfounded. Similar trends have been shown in at least two other studies: both Heywood (2000) and Stefani (1994) found peer grading to be somewhat lower than staff grading.
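The significance of the staff-student differences can be checked from the summary statistics alone. The sketch below is a minimal illustration, assuming the reported n is the per-group sample size and the ± values are standard errors of the mean; it recomputes the unpaired t statistics using the Welch form, which needs only the two standard errors. The 2.0 threshold in the comment approximates the two-tailed 5% critical value at these degrees of freedom.

```python
import math

def t_from_summary(mean1, se1, mean2, se2):
    """Unpaired t statistic from group means and standard errors.

    Welch form: t = (mean1 - mean2) / sqrt(se1**2 + se2**2).
    """
    return (mean1 - mean2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Neuroscience oral presentations: staff 73.6 +/- 0.7 vs students 68.9 +/- 0.4
t_oral = t_from_summary(73.6, 0.7, 68.9, 0.4)

# Physiology/Pharmacology posters: staff 70.0 +/- 0.6 vs students 67.4 +/- 0.6
t_poster = t_from_summary(70.0, 0.6, 67.4, 0.6)

# With roughly 60-100 degrees of freedom the two-tailed 5% critical value
# is about 2.0, so both comparisons clear it comfortably, consistent with
# the significant differences reported in the abstract.
print(round(t_oral, 2), round(t_poster, 2))
```

Both statistics exceed the approximate critical value, in line with the reported significance at the 95% level.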
These findings merit further investigation to establish why this is the case, both to enhance the learning process and to inform any adjustments required in the guidance on attributing marks.
Where applicable, experiments conform with Society ethical requirements