Abstract
This study evaluated and compared the quality and comprehensibility of responses generated by 5 artificial intelligence chatbots (ChatGPT-4, Claude, Mistral, Grok, and Google PaLM) to the most frequently asked questions about uveitis. Google Trends was used to identify frequently searched phrases associated with uveitis, and each chatbot received a sequence of 25 of these terms as input. The responses were evaluated using 3 distinct tools: the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P), the Simple Measure of Gobbledygook (SMOG) index, and the Automated Readability Index (ARI). The 3 most frequently searched terms were "uveitis eye," "anterior uveitis," and "uveitis symptoms." Among the chatbots evaluated, ChatGPT-4 had the lowest ARI and SMOG scores (P = .001), making its responses the easiest to read. On the PEMAT-P, Mistral scored lowest for understandability, whereas Grok scored highest for actionability (P < .001); all chatbots except Mistral achieved high understandability scores. Chatbot technology holds significant potential to enhance healthcare information dissemination and facilitate better patient understanding. Although chatbots can effectively provide information on health topics such as uveitis, further improvement is needed to maximize their efficacy and accessibility.
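For reference, both readability indices estimate the US school-grade level required to comprehend a text, so lower scores indicate easier reading; their standard formulations (background not stated in the abstract itself) are:

\[
\mathrm{ARI} = 4.71\left(\frac{\text{characters}}{\text{words}}\right) + 0.5\left(\frac{\text{words}}{\text{sentences}}\right) - 21.43
\]
\[
\mathrm{SMOG} = 1.0430\sqrt{\text{polysyllables} \times \frac{30}{\text{sentences}}} + 3.1291
\]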