Abstract
BACKGROUND & AIMS: Drug-induced liver injury (DILI) is a complex condition often linked to medication behaviors, and patient education plays a crucial role in optimizing outcomes. Large language models (LLMs) are promising tools for scalable patient support, but their utility in this setting remains unclear. This study systematically evaluated the capability of six popular open- and closed-source LLMs to address common DILI-related queries, focusing on patient-centered education.

METHODS: Twenty-eight frequently asked DILI questions were collected with input from hepatologists and patients (n = 15) and categorized into six clinical domains. Responses from six LLMs (GPT-4, GPT-3.5-Turbo, Claude-2, Claude-1.3, Gemini, and LLaMA-3.1-405B) were anonymized, randomized, and independently evaluated by three hepatologists for accuracy, comprehensiveness, and safety. Additional analyses included automated readability assessment, domain-specific analysis, detailed expert-led error analysis, and direct comparison with physician responses.

RESULTS: LLaMA-3.1-405B achieved the highest performance across most domains, with mean accuracy, comprehensiveness, and safety scores of 8.18 ± 1.68, 3.86 ± 0.70, and 4.02 ± 0.84, respectively, significantly surpassing the other models (Dunn's post hoc test, all p <0.05). o1-preview ranked second (accuracy, 7.29 ± 1.38; safety, 3.80 ± 0.92), whereas GPT-3.5-Turbo consistently performed worst (accuracy, 4.61 ± 1.17; comprehensiveness, 2.13 ± 0.79). In direct comparison with physicians, both LLaMA-3.1-405B and o1-preview significantly outperformed residents and primary care physicians across all metrics (p <0.05). Error analysis showed that omission of crucial information accounted for 72% of errors, predominantly in GPT-3.5-Turbo, whereas hallucinations were rare (<10%) but notable in LLaMA outputs.

CONCLUSION: This study represents the first systematic evaluation of LLMs for DILI-focused patient education.
High-performing, publicly accessible LLMs can deliver accurate, comprehensive, and safe health information, in some cases surpassing physician responses.

IMPACT AND IMPLICATIONS: DILI is a complex, multidisciplinary condition in which patient understanding strongly influences management outcomes, yet educational resources remain scarce. By systematically evaluating six widely used LLMs, including both open- and closed-source models, this study provides new insights into the potential of artificial intelligence tools to enhance patient education and supplement clinical communication in hepatology. These findings are particularly relevant for physicians, patient educators, and healthcare policymakers seeking scalable, reliable strategies to support liver disease management. Although further refinement and clinical oversight are needed to ensure content safety and accuracy, integrating LLM-based tools into patient education initiatives could offer a practical pathway to improving health literacy and engagement in real-world settings.