Effective use of Item Analysis to improve the Reliability and Validity of Undergraduate Medical Examinations: Evaluating the same exam over many years: a different approach


Abstract

OBJECTIVE: Multiple-choice question (MCQ) exams form part of end-of-module assessments in undergraduate medical institutions. Item Analysis (IA) is the standard tool for checking their reliability and validity, providing the reliability coefficient KR-20, the Difficulty Index (DI), the Discrimination Index (DISC), and Distractor Efficiency (DE). Almost all published research on IA is based on the analysis of a single exam. We examined the IA of multiple exams of the same module administered over four years, aiming to explore consistency across the years and the effectiveness of IA-based post-exam measures. METHODOLOGY: The study included the Item Analysis of eight final MCQ exams of the Pediatric module, from 2020-21 to 2023-24, at the Faculty of Medicine in Rabigh, King Abdulaziz University, Saudi Arabia. RESULTS: All exams had a KR-20 of 0.90 or above, indicating excellent reliability. Difficulty levels were consistent except in a single year, and discriminative ability was maintained over the years. Only 28 of 800 MCQs had a negative DISC, and only 15 MCQs over the four years had zero DE; all exams maintained good DE overall. The practice of reviewing all Non-Functional Distractors yielded a gradual improvement in exam quality. CONCLUSION: Besides the IA of individual exams, it is also recommended that the IA of the same exam be evaluated over 4-5 years to assess consistency and trends toward improvement. This helps improve reliability and validity by addressing deficiencies and deviations from the recommended standards.
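The statistics named in the abstract can be computed directly from a scored response matrix. The sketch below is an illustration (not the authors' code) using the standard textbook formulas: KR-20 = (k/(k-1))·(1 − Σpᵢqᵢ/σ²), DI as the proportion of correct responses per item, and DISC as the difference in item difficulty between the top and bottom 27% of examinees by total score. The function name and the 27%-group convention are assumptions for illustration.

```python
import numpy as np

def item_analysis(responses):
    """Illustrative item-analysis sketch (not the study's actual code).

    responses: 0/1 scored matrix, rows = examinees, columns = MCQ items.
    Returns (KR-20, Difficulty Index array, Discrimination Index array).
    """
    responses = np.asarray(responses, dtype=float)
    n, k = responses.shape
    totals = responses.sum(axis=1)

    # Difficulty Index: proportion answering each item correctly
    di = responses.mean(axis=0)

    # KR-20: (k/(k-1)) * (1 - sum(p*q) / variance of total scores)
    pq = di * (1.0 - di)
    kr20 = (k / (k - 1)) * (1.0 - pq.sum() / totals.var())

    # Discrimination Index: DI in the top 27% minus DI in the bottom 27%
    # (27% extreme-group convention; an assumption for this sketch)
    g = max(1, int(round(0.27 * n)))
    order = totals.argsort()
    lower, upper = responses[order[:g]], responses[order[-g:]]
    disc = upper.mean(axis=0) - lower.mean(axis=0)

    return kr20, di, disc
```

A negative DISC value (as in the 28 flagged items) means low scorers outperformed high scorers on that item, which signals a flawed or miskeyed question.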
