Calibration of AI large language models with human subject matter experts for grading of clinical short-answer responses in dental education

Abstract

BACKGROUND: Automated grading of clinical short-answer questions with large language models (LLMs) could reduce faculty workload and speed up feedback in dental education, but evidence on the capacity of LLMs for rubric-based grading in dentistry remains limited. This study therefore compared the grading reliability and error patterns of two LLMs, ChatGPT-4 and the open-weight DeepSeek-3, against expert human evaluators.

MATERIALS AND METHODS: In a retrospective cross-sectional study with a comparative validation design, we analyzed 2,358 short-answer responses from 262 undergraduate dental students across nine clinical questions. All responses were graded by three calibrated subject-matter experts (SMEs; intraclass correlation coefficient [ICC] = 0.84) to establish a human reference score. Each LLM received a 12-point analytic rubric to guide its grading but no prior examples of the grading task (i.e., a zero-shot prompt). Agreement was assessed with the ICC, Pearson correlation, Cohen's kappa, and mixed-effects models, and error tiers (≤ 1, 2–3, > 3 points) were examined across Bloom's cognitive levels and response styles.

RESULTS: In this dataset, DeepSeek-3 achieved an ICC of 0.87 versus 0.64 for ChatGPT-4. DeepSeek-3 matched human scores exactly in 43.3% of cases and was within ± 1 point in 62.4%, compared with 35.5% and 44.1% for ChatGPT-4. High-error rates (> 3 points) were 7.5% for DeepSeek-3 versus 26.9% for ChatGPT-4 (χ², p < 0.01). DeepSeek-3's agreement was consistent across cognitive levels and response verbosity, whereas ChatGPT-4's accuracy on higher-level and more verbose responses was significantly lower (p < 0.01). Both models exhibited an optimistic bias, over-scoring incorrect answers.

CONCLUSIONS: DeepSeek-3 produced fewer large-magnitude errors and agreed more closely with human graders than ChatGPT-4, suggesting its potential value for large-scale AI-assisted assessment in dental education. Because both models can over-score incorrect answers, human-in-the-loop oversight remains necessary for high-stakes applications. Future work should evaluate performance across additional courses, institutions, and languages, and examine model calibration, bias reduction, and external validation before LLMs are integrated more broadly into dental education.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12903-026-07665-4.
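The agreement statistics reported above (exact match, within ± 1 point, and > 3-point error rate) can be illustrated with a minimal sketch. The function name, variable names, and toy scores below are illustrative assumptions, not the study's actual data or analysis code.

```python
def error_tier_summary(human_scores, model_scores):
    """Return exact-match, within-±1, and >3-point error rates as fractions,
    comparing model scores against a human reference on the same rubric."""
    assert len(human_scores) == len(model_scores) and human_scores
    n = len(human_scores)
    # Absolute difference between model and human score for each response.
    diffs = [abs(h - m) for h, m in zip(human_scores, model_scores)]
    return {
        "exact": sum(d == 0 for d in diffs) / n,      # model == human
        "within_1": sum(d <= 1 for d in diffs) / n,   # |model - human| <= 1
        "large_error": sum(d > 3 for d in diffs) / n, # |model - human| > 3
    }

# Toy example on a hypothetical 12-point rubric (not real study data):
human = [10, 8, 12, 6, 9, 4]
model = [10, 9, 11, 2, 9, 8]
print(error_tier_summary(human, model))
```

In the study's framing, a well-calibrated grader would maximize the first two rates while keeping `large_error` near zero; the χ² comparison of high-error rates between models operates on counts of responses falling into these tiers.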
