Abstract
OBJECTIVE: Multiple-choice question (MCQ) exams form part of end-module assessments in undergraduate medical institutions, and Item Analysis (IA) is the standard tool for checking their reliability and validity. IA provides the reliability coefficient (KR20), Difficulty Index (DI), Discrimination Index (DISC), and Distractor Efficiency (DE). Almost all published research on IA is based on single-exam analysis. We examined the IA of multiple exams of the same module administered over four years, aiming to explore consistency across the years and the effectiveness of IA-based post-exam measures.

METHODOLOGY: Item Analysis of eight final MCQ exams of the Pediatric module, from 2020-21 to 2023-24, at the Faculty of Medicine in Rabigh, King Abdulaziz University, Saudi Arabia, was included in the study.

RESULTS: All exams had a KR20 of 0.90 or above, indicating excellent reliability. Difficulty levels were consistent except in a single year, and discriminative ability was maintained over the years: only 28 of 800 MCQs had a negative DISC. All exams maintained good DE; only 15 MCQs over the four years had a DE of zero. The practice of reviewing all non-functional distractors yielded a gradual improvement in exam quality.

CONCLUSION: In addition to the IA of individual exams, we recommend that the IA of the same exam be evaluated over 4-5 years to assess consistency and trends toward improvement. This helps improve reliability and validity by addressing deficiencies and deviations from the recommended standards.
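The metrics named in the abstract follow standard psychometric definitions. As a minimal sketch (not the authors' own software), the KR20 reliability coefficient, Difficulty Index, Discrimination Index, and Distractor Efficiency can be computed from raw responses as below; the 27% upper/lower grouping and the 5%-selection rule for functional distractors are common conventions assumed here, not details taken from the paper.

```python
# Hedged sketch of standard item-analysis metrics.
# Input for the first three functions: a students-x-items matrix of 0/1 scores.

def kr20(scores):
    """Kuder-Richardson 20 reliability coefficient."""
    k = len(scores[0])                       # number of items
    n = len(scores)                          # number of students
    totals = [sum(row) for row in scores]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n   # population variance of totals
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in scores) / n        # proportion correct on item i
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)

def difficulty_index(scores, item):
    """DI: proportion of students answering the item correctly."""
    return sum(row[item] for row in scores) / len(scores)

def discrimination_index(scores, item, frac=0.27):
    """DISC: upper-group minus lower-group proportion correct (27% rule)."""
    ranked = sorted(scores, key=sum, reverse=True)
    g = max(1, round(frac * len(scores)))
    upper = sum(row[item] for row in ranked[:g]) / g
    lower = sum(row[item] for row in ranked[-g:]) / g
    return upper - lower

def distractor_efficiency(choice_counts, correct, threshold=0.05):
    """DE: percentage of distractors chosen by >= 5% of examinees.

    choice_counts maps each option label to the number of examinees
    who chose it; `correct` is the keyed option's label.
    """
    n = sum(choice_counts.values())
    distractors = [c for c in choice_counts if c != correct]
    functional = sum(1 for c in distractors if choice_counts[c] / n >= threshold)
    return 100.0 * functional / len(distractors)
```

For example, an item on which every upper-group student succeeds and every lower-group student fails yields DISC = 1.0, while a negative DISC (as in 28 of the 800 MCQs reported) means weaker students outperformed stronger ones on that item.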