Abstract
OBJECTIVES: To evaluate the performance of large language models (LLMs) enhanced with retrieval-augmented generation (RAG), in-context learning (ICL) and majority voting on prosthodontic questions.

METHODS: Two LLMs, OpenAI o1 and DeepSeek-R1, were used as base models. Each base model was enhanced using RAG, ICL and majority voting, and the resulting enhanced LLMs (enhanced DeepSeek-R1 and enhanced OpenAI o1) were evaluated alongside their base counterparts. Standardised Chinese-language and English-language prosthodontic questions were used to evaluate the performance of the base and enhanced LLMs. The correctness of each answer and the error types were recorded. χ² tests were used to compare differences among the four LLMs (α = 0.05).

RESULTS: The enhanced versions of both DeepSeek-R1 and OpenAI o1 achieved significantly higher accuracy than their base versions on Chinese-language multiple-choice questions (C-MCQs) (P < .001). On English-language multiple-choice questions (E-MCQs), both enhanced versions also showed higher accuracy, but the difference was not statistically significant (P = .145). In the error analysis, the enhanced LLMs significantly reduced knowledge-based errors in C-MCQs (P < .008) but showed only a non-significant reduction in E-MCQs (P = .604). Thus, the significant gains from model enhancement were confined to C-MCQs and were not observed in E-MCQs.

CONCLUSIONS: LLMs enhanced with the inference-time strategies of RAG, ICL and majority voting showed significantly improved accuracy in answering C-MCQs. Although this approach effectively mitigated errors, the improvements in E-MCQs were not consistently statistically significant.

CLINICAL SIGNIFICANCE: This study shows that enhanced LLMs can analyse prosthodontic questions with good accuracy, suggesting potential in dental education. Enhanced LLMs may therefore be more suitable than non-enhanced ones for dental-related tasks.
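The abstract does not detail how the majority-voting enhancement was implemented; the sketch below shows one common way such voting is done at inference time, by sampling several answers to the same multiple-choice question and returning the most frequent one. The function name and the sampled answers are hypothetical illustrations, not the authors' implementation.

from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among repeated model samples.

    Hypothetical helper for illustration; ties are broken by the
    order in which answers were first seen.
    """
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

# Hypothetical example: five sampled answers to one multiple-choice question.
samples = ["B", "B", "C", "B", "A"]
print(majority_vote(samples))  # -> "B"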
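For the statistical comparison, a χ² test of independence on a contingency table of correct versus incorrect answers is the standard approach the abstract describes. The sketch below, using scipy.stats.chi2_contingency, shows the shape of such a test; the counts are illustrative placeholders only, not the study's data.

from scipy.stats import chi2_contingency

# Hypothetical correct/incorrect answer counts for the four models.
# These numbers are placeholders, not results from the study.
table = [
    [70, 30],  # DeepSeek-R1 (base): correct, incorrect
    [92, 8],   # DeepSeek-R1 (enhanced)
    [68, 32],  # OpenAI o1 (base)
    [90, 10],  # OpenAI o1 (enhanced)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.4f}")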