Abstract
BACKGROUND: Large language models (LLMs) have seen extensive application in health information consultation, enabling interactive responses to complex queries; however, their reliability and readability warrant further investigation. This study aims to assess the reliability and readability of cross-disciplinary responses regarding thunderstorm asthma generated by four artificial intelligence platforms: ChatGPT-4, Deepseek-V3.2, Perplexity Pro, and Microsoft Copilot.

METHODS: Google Trends was used to identify and filter topic-specific queries on thunderstorm asthma. Responses generated by ChatGPT-4, Deepseek-V3.2, Perplexity Pro, and Microsoft Copilot to conversational inputs were then analysed. The 29 selected responses covered topics ranging from the accuracy of meteorological forecasting of thunderstorms to prevalent themes in asthma symptomatology and therapeutic interventions. Reliability was assessed with the DISCERN instrument, the Ensuring Quality Information for Patients Scale (EQIP), the JAMA benchmarks, and the Global Quality Scoring (GQS), in conjunction with six authoritative readability metrics: the Automated Readability Index (ARI), the Coleman-Liau Grade Level (CL), the Flesch-Kincaid Grade Level (FKGL), the Flesch Reading Ease Score (FRES), the Gunning Fog Index (GFI), and the Simple Measure of Gobbledygook (SMOG).

RESULTS: The findings indicate statistically significant differences in reliability among the artificial intelligence programmes when responding to complex interdisciplinary information queries. Microsoft Copilot demonstrated superior performance in information reliability and structural quality, consistently scoring higher than ChatGPT-4 and Perplexity Pro and thereby providing more dependable information. However, the responses generated by all programmes were excessively complex for the general public and failed to meet the sixth-grade reading comprehension standard, with the majority of outputs written at a secondary education level or higher.

CONCLUSION: While LLMs demonstrate some reliability in handling complex health consultations, none of the evaluated platforms meet the recommended sixth-grade readability benchmark. Future efforts should focus on improving both the reliability and the readability of LLM-generated health information to enhance comprehension amongst broader audiences.
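For reference, the six readability indices named above are all computed from simple surface counts of sentences, words, characters, and syllables, using their standard published formulas. The sketch below is a minimal Python illustration of those formulas, not the tooling used in the study; in particular, the regex-based syllable counter is a crude heuristic assumed here for self-containment, whereas established readability tools rely on pronunciation dictionaries.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Heuristic syllable count (an assumption for this sketch):
    count runs of vowels, subtract a likely-silent trailing 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    """Compute the six indices from raw counts, using the standard
    published formulas for ARI, CL, FKGL, FRES, GFI, and SMOG."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    w = len(words)
    chars = sum(len(wd) for wd in words)
    syllables = sum(count_syllables(wd) for wd in words)
    # "Complex" words (3+ syllables) feed the GFI and SMOG formulas.
    complex_words = sum(1 for wd in words if count_syllables(wd) >= 3)

    wps = w / sentences            # words per sentence
    spw = syllables / w            # syllables per word
    L = chars / w * 100            # letters per 100 words (Coleman-Liau)
    S = sentences / w * 100        # sentences per 100 words (Coleman-Liau)

    return {
        "ARI":  4.71 * (chars / w) + 0.5 * wps - 21.43,
        "CL":   0.0588 * L - 0.296 * S - 15.8,
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        "FRES": 206.835 - 1.015 * wps - 84.6 * spw,
        "GFI":  0.4 * (wps + 100 * complex_words / w),
        "SMOG": 1.0430 * math.sqrt(complex_words * 30 / sentences) + 3.1291,
    }

if __name__ == "__main__":
    # Hypothetical sample text, purely for demonstration.
    sample = ("Thunderstorm asthma occurs when storm outflows rupture "
              "pollen grains into respirable fragments. Susceptible "
              "individuals may experience sudden bronchoconstriction.")
    for name, score in readability(sample).items():
        print(f"{name}: {score:.1f}")
```

All indices except FRES approximate a United States school grade level, so the sixth-grade benchmark cited in the conclusion corresponds to grade-level scores of about 6 or below (and FRES scores of roughly 80 or above).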