Proceedings of The Physiological Society

Europhysiology 2018 (London, UK) (2018) Proc Physiol Soc 41, C117

Oral Communications

A simple assessment instrument for physiology coursework: does it drive learning?

S. M. Roe1

1. Centre for Biomedical Sciences Education, School of Medicine, Dentistry and Biomedical Science, Queen's University Belfast, Belfast, Antrim, United Kingdom.

It is well established that assessment drives learning behaviour in students (Biggs & Tang, 2007; Ramsden, 1992). Rowntree (1987) cited six purposes of assessment, including maintenance of standards, motivating students, motivating teachers, and preparing students for professional life. Indeed, Davies (1994) has posited that a curriculum designed to promote deep learning is pointless if the assessment encourages students to adopt a surface approach. With the aim of using assessment to promote learning, I have developed a coursework feedback instrument for use within the Centre for Biomedical Sciences Education at QUB. Using this instrument, several criteria (alterable depending on the assessment modality: essay, poster, oral presentation, or experimental write-up) are graded on a conceptual scale from first-class honours to pass. These grades are then combined into a final mark, which is returned with comments. Staff have committed to returning this detailed, personalised and signed feedback within two weeks of the submission date.

To evaluate the effectiveness of the feedback instrument in promoting learning, a questionnaire was distributed to a Scientific Methods class on the day that two pieces of coursework (a referencing exercise and a critical review of a scientific article) were returned. Questions were posed on the effectiveness of the feedback in driving student learning, whether it would change learning behaviour, and whether it initiated a dialogue between students and staff. Anecdotal evidence suggested that increased feedback made student marks a starting point for a negotiation on grades; to address this, a further two questions were asked on the perceived fairness of the process. A 5-point Likert scale was used to evaluate the response to each question, with 5 indicating strong agreement with a statement and 1 strong disagreement. Ratings are given as mean marks out of 5 ± S.E.M. (n = 47).
In addition to the Likert questions, open-ended questions asked students about the positives and negatives of the process and how it could be improved. Students agreed that the assessment and feedback tools used here had driven their learning (4.4 ± 0.8) and that the feedback had altered how they would approach assessment in the future (4.2 ± 0.8). The assessment was considered fair (4.7 ± 0.1) and to have initiated a dialogue between students and staff (4.1 ± 0.1). There was a reassuringly low score in response to the question asking whether the feedback had made students disappointed with the grade achieved (1.9 ± 0.2). These findings suggest positive engagement with the assessment process and are encouraging about students' maturity and ability to take direction from academics, once given appropriately detailed and timely feedback.
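For readers unfamiliar with the summary statistic used above, a minimal sketch of how mean ± S.E.M. is computed from 5-point Likert responses follows; the response values shown are hypothetical and are not the study data.

```python
import math

# Hypothetical 5-point Likert responses (1 = strong disagreement,
# 5 = strong agreement); illustrative only, not the study data.
responses = [5, 4, 4, 5, 3, 4, 5, 4]

n = len(responses)
mean = sum(responses) / n

# Sample standard deviation (n - 1 denominator), then the standard
# error of the mean: S.E.M. = s / sqrt(n)
s = math.sqrt(sum((x - mean) ** 2 for x in responses) / (n - 1))
sem = s / math.sqrt(n)

print(f"{mean:.1f} \u00b1 {sem:.1f} (n = {n})")
```

With a larger n (such as the 47 respondents here), the S.E.M. shrinks in proportion to the square root of the sample size, which is why tightly clustered Likert ratings yield the small standard errors reported above.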

Where applicable, experiments conform with Society ethical requirements