Are evaluations in simulated medical encounters reliable among rater types? A comparison between standardized patient and outside observer ratings of OSCEs


Abstract

OBJECTIVE: By analyzing Objective Structured Clinical Examination (OSCE) evaluations of first-year interns' communication with standardized patients (SPs), our study aimed to examine the differences between the ratings of SPs and those of outside observers trained in healthcare communication. METHODS: Immediately following completion of the OSCEs, SPs evaluated interns' communication skills using 30 items. Later, two observers independently coded video recordings using the same items. We conducted two-tailed t-tests to examine differences between SPs' and observers' ratings. RESULTS: Rater scores differed significantly on 21 items (p < .05), with 20 of the 21 differences due to higher SP in-person evaluation scores. The items most divergent between SPs and observers related to empathic communication and nonverbal communication. CONCLUSION: Differences between SP and observer ratings should be further investigated to determine whether additional rater training or a revised evaluation measure is needed. Educators may benefit from adjusting evaluation criteria to reduce the number of items raters must complete, for example by using more global questions covering the various criteria. Furthermore, evaluation measures may be strengthened by undergoing reliability and validity testing. INNOVATION: This study highlights the strengths and limitations of rater types (observers or SPs), as well as of evaluation methods (recorded or in-person).
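The methods describe per-item two-tailed t-tests comparing SP and observer ratings. As a minimal sketch of that comparison (assuming a paired design where each intern receives both an SP score and an observer score on a given item; the scores below are hypothetical, not from the study):

```python
import math
from statistics import mean, stdev

def paired_t(sp_scores, obs_scores):
    """Two-tailed paired t statistic for SP vs. observer ratings on one item.

    Returns (t, degrees of freedom); |t| above the critical value for the
    chosen alpha indicates a significant difference between rater types.
    """
    diffs = [s - o for s, o in zip(sp_scores, obs_scores)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Hypothetical 5-point ratings for one item across 10 interns
sp_ratings  = [5, 4, 5, 5, 4, 5, 4, 5, 5, 4]
obs_ratings = [4, 3, 4, 4, 4, 4, 3, 4, 4, 3]

t, df = paired_t(sp_ratings, obs_ratings)
# Two-tailed critical value of the t distribution at alpha = .05, df = 9
significant = abs(t) > 2.262
```

In practice one would obtain an exact p-value from a t distribution (e.g. `scipy.stats.ttest_rel`) rather than a tabled critical value; the study does not state whether paired or independent-samples tests were used, so this pairing is an assumption for illustration.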
