Abstract
BACKGROUND: Assessment plays a crucial role in medical education, providing both measures of student performance and feedback for curriculum improvement. This study evaluates the quality of A-type (single-best-answer) multiple-choice questions (MCQs) used in Fixed Prosthodontics examinations over three academic years (2020-2022) at a Sudanese dental school, using psychometric item analysis. METHODS: This cross-sectional study (February-April 2023) retrospectively analyzed 155 MCQs administered between 2020 and 2022. Each item was assessed for item-writing flaws (IWFs), difficulty index (DIF: percentage of correct responses), discrimination index (DIS: ability to differentiate between the highest- and lowest-scoring 27% of examinees), and reliability (Cronbach's alpha). The normality of DIF and DIS data was tested; Spearman's or Pearson's correlation was used as appropriate. RESULTS: The percentage of items with IWFs was 48% (2020), 42% (2021), and 33% (2022). Mean ± SD DIF values were 75.51 ± 8.47, 72.25 ± 7.93, and 60.00 ± 11.63 for 2020, 2021, and 2022, respectively; the corresponding DIS values were 0.18 ± 0.17, 0.13 ± 0.16, and 0.21 ± 0.13. Cronbach's alpha coefficients (α) increased from 0.69 (2020) and 0.64 (2021) to 0.76 (2022). There were no significant correlations between DIF and DIS across the three years (p > 0.05). CONCLUSION: Item-writing quality improved from 2020 to 2022, with the proportion of flawed items decreasing and reliability increasing. Most items were easy in 2020 and 2021, while more items fell within the acceptable difficulty range in 2022. Discrimination improved but remained suboptimal. There was no significant correlation between the difficulty and discrimination indices. Routine psychometric analysis and enhanced faculty training are recommended to maintain assessment standards in dental education.
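The three psychometric indices named in METHODS (DIF as percentage correct, DIS from the upper and lower 27% of examinees, and Cronbach's alpha) can be sketched for dichotomously scored items as follows. This is a minimal illustration assuming a 0/1 response matrix of examinees by items; the function name `item_analysis` and the data layout are assumptions for the example, not the study's actual tooling.

```python
import numpy as np

def item_analysis(responses):
    """Item analysis for dichotomously scored (0/1) MCQ data.

    responses: 2-D array, shape (n_examinees, n_items),
    where 1 = correct and 0 = incorrect.
    Returns per-item difficulty (DIF, % correct), per-item
    discrimination (DIS, upper-minus-lower 27% proportion correct),
    and Cronbach's alpha for the whole test.
    """
    responses = np.asarray(responses, dtype=float)
    n, k = responses.shape

    # Difficulty index: percentage of examinees answering each item correctly.
    dif = responses.mean(axis=0) * 100

    # Rank examinees by total score and take the top and bottom 27%.
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    g = max(1, int(round(0.27 * n)))
    lower, upper = responses[order[:g]], responses[order[-g:]]

    # Discrimination index: proportion correct in the upper group
    # minus proportion correct in the lower group.
    dis = upper.mean(axis=0) - lower.mean(axis=0)

    # Cronbach's alpha: internal-consistency reliability,
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of totals).
    item_vars = responses.var(axis=0, ddof=1)
    total_var = totals.var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    return dif, dis, alpha
```

Under common rules of thumb, DIF values between roughly 30% and 70% and DIS values of 0.20 or higher are considered acceptable, which is the yardstick against which the reported means can be read.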