Abstract
AIMS: This study aimed to examine surgical trainees' perceptions of the objectivity of clinical competency assessment methods and to integrate these findings into the design of a digital assessment platform. METHODS: A cross-sectional survey was conducted with 154 participants (47 senior medical students and 107 residents in postgraduate years 1-3). A validated questionnaire (Cronbach's alpha = 0.81) assessed perceived objectivity for four formats: multiple-choice questions (MCQ), oral examinations, the Mini-Clinical Evaluation Exercise (Mini-CEX), and the Objective Structured Clinical Examination (OSCE). Data were analyzed using descriptive statistics, non-parametric tests, regression models, and thematic analysis of open-ended responses. RESULTS: The Mini-CEX (median = 1.35, IQR = 1-2) and OSCE (median = 1.51, IQR = 1-2) were rated most objective, while the MCQ format was rated least objective (median = 2, IQR = 2-3). No significant differences were found between 6th- and 7th-year students (p > 0.05). Third-year residents, however, reported lower perceived objectivity for MCQs (p = 0.001). Regression analysis showed that gender predicted perceptions of MCQs (B = 0.377, p = 0.005), whereas age and training level did not. CONCLUSIONS: Practice-oriented assessments, particularly the OSCE and Mini-CEX, were viewed as most objective. These findings support digital assessment platforms that integrate varied, practice-based formats to ensure equitable and comprehensive evaluation of clinical competence.