Abstract
INTRODUCTION: Reflection is integral to the modern doctor's practice and, whilst it can take many forms, written reflection is commonly found on medical school curricula. Generative artificial intelligence (GenAI) is increasingly being used, including in the completion of written assignments in medical curricula. We sought to explore whether educators can distinguish between GenAI- and student-authored reflections and what features they use to do so.

METHODS: This was a mixed-methods study. Twenty-eight educators attended a 'think aloud' interview and were presented with a set of four reflections, either all authored by students, all by GenAI or a mixture. They were asked to identify who they thought had written each reflection, speaking aloud whilst they did so. Sensitivity (AI reflections correctly identified) and specificity (student reflections correctly identified) were then calculated, and the interview transcripts were analysed using thematic analysis.

RESULTS: Educators were unable to reliably distinguish between student- and GenAI-authored reflections. Sensitivity across the four reflections ranged from 0.36 (95% CI: 0.16-0.61) to 0.64 (95% CI: 0.39-0.84). Specificity ranged from 0.64 (95% CI: 0.39-0.84) to 0.86 (95% CI: 0.60-0.96). Thematic analysis revealed three main themes in the features educators used to make judgements about authorship: features of writing, features of reflection, and educators' preconceptions and experiences.

DISCUSSION: This study demonstrates the challenges of differentiating between student- and GenAI-authored reflections, and highlights the range of factors that influence this decision. Rather than developing ways to make this distinction more accurately or trying to stop students using GenAI, we suggest it could instead be harnessed to teach students reflective practice skills and to help students for whom written reflection in particular may be challenging.