Abstract
INTRODUCTION: Standardized patients (SPs) are integral to training medical students in virtual consultation (VC) skills. A key aspect of their role is the ability to evaluate student performance and deliver meaningful feedback to enhance learning. This study aimed to design and validate a scoring rubric specifically for SPs to assess undergraduate medical students' performance during VC encounters.
METHODS: This study adopted a seven-step approach to rubric development. Content and face validation of the rubric were conducted with the participation of relevant stakeholders. The reliability of the rubric was assessed by measuring the internal consistency and the inter-rater, intra-rater, and test-retest correlation coefficients of the SPs' evaluation scores of medical students' performance in VC.
RESULTS: A rubric comprising ten evaluation dimensions and a four-level scoring scale was developed. The item content validity index (I-CVI) exceeded 0.78, and the average content validity index (AVE-CVI) was 0.98. Qualitative assessment of face validity confirmed the rubric's clarity and relevance. Five SPs evaluated medical students' performance in VC encounters, each reviewing 35 videos. The rubric demonstrated strong internal consistency, with a Cronbach's alpha of 0.82, and the inter-rater, intra-rater, and test-retest correlation coefficients indicated a high level of reliability.
CONCLUSION: The newly developed SP rubric for assessing medical students' VC skills demonstrated the desired psychometric properties, making it suitable for use in training and assessing VC competency. Future research is needed to test its applicability in alternative forms, for example as a guide for SPs when providing formative verbal feedback to medical students.
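For readers less familiar with the content-validity indices reported above, the standard definitions are sketched below. This is an illustrative note rather than part of the study's methods; it assumes the usual 4-point relevance scale on which expert ratings of 3 or 4 count as "relevant," and k denotes the number of rubric items.

\[
\text{I-CVI}_i = \frac{\text{number of experts rating item } i \text{ as relevant}}{\text{total number of experts}},
\qquad
\text{AVE-CVI} = \frac{1}{k}\sum_{i=1}^{k}\text{I-CVI}_i .
\]

The 0.78 benchmark for I-CVI is the criterion commonly cited in the content-validity literature for expert panels of roughly six to ten raters.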