Evaluating AI-generated examination papers in periodontology: a comparative study with human-designed counterparts


Abstract

OBJECTIVE: This study systematically evaluates the performance of artificial intelligence (AI)-generated examinations in periodontology education, comparing their quality, student outcomes, and practical applications with those of human-designed examinations.

METHODS: A randomized controlled trial was conducted with 126 undergraduate dental students, who were divided into AI (n = 63) and human (n = 63) test groups. The AI-generated examination was developed using GPT-4, while the human examination was derived from the 2024 institutional final exam. Both assessments covered identical content from Periodontology (5th Edition) and included 90 multiple-choice questions (MCQs) across five formats: A1: Single-sentence best choice; A2: Case summary best choice; A3: Case group best choice; A4: Case chain best choice; X: Multiple correct options. Psychometric properties (reliability, validity, difficulty, discrimination) and student feedback were analyzed using split-half reliability, content coverage analysis, factor analysis, and 5-point Likert scales.

RESULTS: The AI examination demonstrated superior content coverage (81.3% vs. 72.4%) and significantly higher total scores (79.34 ± 6.93 vs. 73.17 ± 9.57, p = 0.027). However, it showed significantly lower discrimination indices overall (0.35 vs. 0.49, p = 0.004). Both examinations exhibited adequate split-half reliability (AI = 0.81, human = 0.84) and comparable difficulty distributions (AI: easy 40.0%, moderate 46.7%, difficult 13.3%; human: easy 30.0%, moderate 50.0%, difficult 20.0%; p = 0.274). Student feedback revealed significantly lower ratings for the AI test in terms of perceived difficulty appropriateness (3.53 ± 1.03 vs. 4.19 ± 0.76, p < 0.001), knowledge coverage (3.67 ± 0.89 vs. 4.19 ± 0.72, p < 0.001), and learning inspiration (3.79 ± 0.90 vs. 4.25 ± 0.67, p = 0.001).

CONCLUSION: While AI-generated examinations improve content breadth and efficiency, their limited clinical contextualization and discrimination constrain their use in high-stakes applications. A hybrid "AI-human collaborative generation" framework, integrating medical knowledge graphs for contextual optimization, is proposed to balance automation with assessment precision. This study provides empirical evidence for the role of AI in enhancing dental education assessment systems.
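The psychometric indices reported above (item difficulty, item discrimination, and split-half reliability) follow standard classical test theory definitions. The sketch below is a hypothetical illustration of how such indices can be computed from a 0/1-scored MCQ response matrix; it is not the authors' analysis code, and the simulated data, function names, and the 27% extreme-group convention are assumptions for demonstration only.

```python
# Hypothetical sketch: classical test theory metrics for a 0/1-scored MCQ matrix.
# Rows = students, columns = items. Not the study's actual analysis pipeline.
import numpy as np

def item_difficulty(responses):
    """Proportion of examinees answering each item correctly (P-value)."""
    return responses.mean(axis=0)

def item_discrimination(responses, top_frac=0.27):
    """Upper-minus-lower discrimination index using extreme score groups."""
    totals = responses.sum(axis=1)
    k = max(1, int(round(top_frac * len(totals))))
    order = np.argsort(totals)
    low, high = responses[order[:k]], responses[order[-k:]]
    return high.mean(axis=0) - low.mean(axis=0)

def split_half_reliability(responses):
    """Odd-even split-half correlation with Spearman-Brown correction."""
    odd = responses[:, 0::2].sum(axis=1)
    even = responses[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Simulated example sized like one study arm: 63 students x 90 items.
rng = np.random.default_rng(0)
resp = (rng.random((63, 90)) < 0.7).astype(int)
print(item_difficulty(resp)[:5])
print(item_discrimination(resp)[:5])
print(split_half_reliability(resp))
```

With real response data in place of the simulated matrix, per-item difficulty and discrimination values would then be binned (e.g., easy/moderate/difficult) to produce distributions comparable to those reported in the results.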
