Large language models versus traditional textbooks: optimizing learning for plastic surgery case preparation

Abstract

BACKGROUND: Large language models (LLMs), such as ChatGPT-4 and Gemini, represent a new frontier in surgical education by offering dynamic, interactive learning experiences. Despite their potential, concerns persist about the accuracy, depth of knowledge, and bias of LLM responses. This study evaluates the effectiveness of LLMs in aiding surgical trainees in plastic and reconstructive surgery by comparing them with traditional case-preparation textbooks.

METHODS: Six representative cases from key areas of plastic and reconstructive surgery (craniofacial, hand, microsurgery, burn, gender-affirming, and aesthetic) were selected. Four types of questions were developed for each case, covering clinical anatomy, indications, contraindications, and complications. Responses from the LLMs (ChatGPT-4 and Gemini) and from textbooks were compared using surveys distributed to medical students, research fellows, residents, and attending surgeons. Reviewers rated each response on accuracy, thoroughness, usefulness for case preparation, brevity, and overall quality using a 5-point Likert scale. Statistical analyses, including ANOVA and unpaired t-tests, assessed differences between LLM and textbook responses.

RESULTS: A total of 90 surveys were completed. LLM responses were rated as more thorough (p < 0.001) but less concise (p < 0.001) than textbook responses. Textbooks were rated superior for questions on contraindications (p = 0.027) and complications (p = 0.014). ChatGPT was perceived as more accurate (p = 0.018), thorough (p = 0.002), and useful (p = 0.026) than Gemini. Gemini was also rated lower in overall quality than ChatGPT (p = 0.30) and inferior to textbook answers for burn-related questions (p = 0.017) and anatomical questions (p = 0.013).

CONCLUSION: While LLMs show promise in generating thorough educational content, they need improvement in conciseness, accuracy, and utility for practical case preparation. ChatGPT generally outperformed Gemini, indicating variability in LLM capabilities. Further development should focus on enhancing accuracy and consistency to establish LLMs as reliable tools in medical education and practice.
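As an aside, the unpaired comparison described in the methods can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of comparing Likert ratings from two sources with Welch's unpaired t-test; the rating values and variable names are invented for demonstration and do not come from the study's data.

```python
# Hypothetical sketch: unpaired (Welch's) t-test on 5-point Likert ratings,
# mirroring the kind of comparison the methods section describes.
# All rating values below are invented for illustration only.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Invented thoroughness ratings for LLM vs. textbook responses.
llm_ratings = [5, 4, 5, 4, 5, 4, 5, 5]
textbook_ratings = [3, 4, 3, 3, 4, 3, 3, 4]

t_stat = welch_t(llm_ratings, textbook_ratings)
print(f"Welch's t = {t_stat:.2f}")
```

In practice, a study like this would use a statistics package (e.g. `scipy.stats.ttest_ind` with `equal_var=False`) to obtain the p-value alongside the t statistic; the sketch above only shows the core calculation.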
