Abstract
Artificial intelligence has been widely used to answer questions in the medical context. This study aimed to evaluate the performance, reliability, and precision of ChatGPT-4.0 in responding to multiple-choice questions (MCQs) previously administered to medical students. We conducted an observational, cross-sectional study to assess ChatGPT's performance by analyzing its accuracy, examining associations with specific knowledge areas and Bloom's taxonomy levels, assessing the influence of the psychometric properties of the items, and investigating the effect of images on the results. Across the eight examinations analyzed, chatbot performance ranged from 46.7% to 90.0% on the first attempt, 47.5% to 90.0% on the second attempt, and 28.3% to 89.2% on the third attempt. The concordance rate ranged from 56.2% to 62.0%, with Cohen's kappa coefficients ranging from 0.071 to 0.217. On the second and third attempts, basic science had the highest scores (90.0% and 93.3%, respectively), whereas surgery (55.8%) and pediatrics (43.4%) had the lowest scores. In summary, the chatbot's performance on medical examinations was poor and inferior to that of its human counterparts, and it demonstrated low reliability and precision in answering medical questions.