Abstract
Background

Artificial intelligence (AI) chatbots are increasingly used in healthcare for information dissemination and clinical decision support. However, their reliability and applicability in subspecialties such as ocular oncology remain largely unassessed. This study aimed to evaluate the accuracy, completeness, readability, and real-world utility of three prominent AI chatbots in responding to clinically relevant questions on ocular malignancies: ChatGPT-4o (OpenAI, San Francisco, California, USA), DeepSeek v3 (DeepSeek, Hangzhou, Zhejiang, China), and Gemini 2.0 (Google DeepMind, London, UK).

Methods

A cross-sectional observational study was conducted at a tertiary eye care institute in Northern India. Five clinical questions covering key ocular oncologic conditions were created and standardized by ocular oncology experts. These prompts were entered into ChatGPT-4o, DeepSeek v3, and Gemini 2.0. Responses were independently evaluated using a structured proforma assessing correctness, completeness, readability (Flesch-Kincaid score, word count, sentence count), presence of irrelevant data, applicability in the Indian healthcare setting, and reliability. Data were analyzed using the Kruskal-Wallis test and analysis of variance (ANOVA).

Results

All three chatbots demonstrated comparable correctness scores (mean 3.4, SD 0.49). However, four of the five responses from each chatbot were deemed incomplete. DeepSeek v3 provided the most verbose and readable answers (mean 533.8 words; Flesch score 38.0), while ChatGPT-4o generated the shortest yet most clinically reliable responses (mean reliability score 3.2). Gemini 2.0 exhibited the greatest variability in response length and structure. No irrelevant content was observed in any chatbot's responses. Only two of the five responses from ChatGPT-4o, and one of five from each of the other two chatbots, were directly applicable to Indian clinical practice.

Conclusion

While AI chatbots can offer factually accurate responses to ocular oncology-related queries, they often fall short in completeness and clinical applicability. ChatGPT-4o showed the most balanced performance, though regional customization and expert oversight remain essential. Current models are not yet suitable for unsupervised use in high-stakes clinical scenarios.