Abstract
Introduction
Patient education plays a critical role in stroke care and management. It helps patients understand their health, diagnosis, diagnostic modalities, and treatment, and it improves their overall experience. With the integration of artificial intelligence (AI) tools into healthcare, patient education has become more efficient and easily accessible, making these tools a powerful asset in healthcare.

Methodology
In this cross-sectional study, two AI tools, ChatGPT (OpenAI, San Francisco, California, United States) and DeepSeek AI (DeepSeek, Hangzhou, Zhejiang, China), were prompted to create patient education guides on three imaging modalities used in stroke care: digital subtraction angiography (DSA), non-contrast computed tomography (CT), and diffusion-weighted imaging (DWI). Both sets of responses were assessed for word count, sentence count, average words per sentence, ease score, grade level, and average syllables per word using the Flesch-Kincaid calculator. Reliability and similarity were assessed using the modified DISCERN score and QuillBot, respectively. Statistical analysis was performed using R version 4.3.2 (R Foundation for Statistical Computing, Vienna, Austria).

Results
In generating patient education materials for non-contrast CT, DWI, and DSA in stroke care, ChatGPT and DeepSeek AI showed similar performance across grade level, ease score, similarity, and reliability, with no statistically significant differences. ChatGPT often produced slightly higher grade levels, while DeepSeek AI had higher ease scores for some modalities. Similarity percentages varied by topic but were comparable on average, and reliability was uniformly high. Linguistic features showed only minor, non-significant differences.

Conclusions
Both ChatGPT and DeepSeek AI performed similarly in generating patient education guides in terms of ease of understanding and readability.
These results suggest that either AI tool can be effectively used for patient education in this context.
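For reference, the Flesch-Kincaid metrics reported above are simple functions of word, sentence, and syllable counts. The study used a Flesch-Kincaid calculator (and R for analysis); the sketch below is only an illustrative reimplementation of the standard formulas, not the study's own code:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores indicate easier text (90-100 is very easy)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate US school grade needed to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Hypothetical example: a 100-word passage with 10 sentences and 150 syllables
print(round(flesch_reading_ease(100, 10, 150), 1))   # about 69.8 ("plain English")
print(round(flesch_kincaid_grade(100, 10, 150), 1))  # about 6.0 (sixth-grade level)
```

Both formulas depend only on the average sentence length (words per sentence) and average word length (syllables per word), which is why those two linguistic features were tabulated alongside the scores.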