Abstract
INTRODUCTION: The reliability of performance assessment scores can be affected by several factors, such as the number of students, the number of raters, and rater performance. Minimizing inter-rater variability and ensuring the applicability of assessments are important in evaluating medical education programs. This study aimed to examine the reliability coefficients derived from a generalizability study (G-study) and a decision study (D-study) conducted within a two-facet crossed design, using generalizability theory (G-theory) to assess performance in medical education.

METHOD: This study employed a two-facet, crossed mixed design [b × p × m]. A total of 40 randomly selected students were evaluated by five raters (random facet) on 35 items (fixed facet) in a performance assessment setting. Data were analyzed using EduG software.

RESULTS: Of the participants, 142 (60%) were female and 98 (40%) were male. The total mean score for the crossed set of skills was 62.11, the individual (student) variance component accounted for 33.90% of the total estimated variance, and the G-coefficient was 0.94. In the D-study, the reliability coefficients were 0.86 for two raters, 0.90 for three, 0.92 for four, 0.94 for five, 0.95 for six, and 0.96 for seven. The G-study facet analyses showed no differences between the raters.

CONCLUSIONS: Inter-rater variability is a potential risk to performance evaluations, regardless of application design. Rater standardization is recommended to reduce this risk. In our study, rater standardization and D-studies were performed using a crossed mixed design. As these analyses become more widespread, crossed designs strengthened by rater standardization may become the preferred option in assessment and evaluation practices in medical education: suitable rater standardization supports the use of crossed designs in performance assessment, and analyzing the ratings with G-theory provides feedback for subsequent ratings.
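For context, the D-study projections above are consistent with the standard relative G-coefficient formula for a persons × raters design. The sketch below is illustrative only: it back-solves the per-rater relative error ratio from the reported five-rater coefficient rather than using the study's actual variance components, and it treats \(\sigma^2_\delta\) as the pooled single-rater relative error variance.

\[
\mathrm{E}\rho^2(n_r) = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\delta / n_r},
\qquad
\frac{\sigma^2_\delta}{\sigma^2_p} = n_r \cdot \frac{1 - \mathrm{E}\rho^2(n_r)}{\mathrm{E}\rho^2(n_r)}
\]

With the reported \(\mathrm{E}\rho^2(5) = 0.94\), the implied ratio is \(5 \times 0.06 / 0.94 \approx 0.32\), which reproduces the reported pattern: \(\mathrm{E}\rho^2(2) \approx 1/(1 + 0.32/2) \approx 0.86\) and \(\mathrm{E}\rho^2(7) \approx 1/(1 + 0.32/7) \approx 0.96\).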