Abstract
BACKGROUND: Neonatal Home Oxygen Therapy (NHOT) is a critical treatment for premature infants with Bronchopulmonary Dysplasia (BPD). However, existing health education materials are generally difficult to read, particularly for grandparent caregivers with lower educational backgrounds. This study aimed to systematically evaluate the capacity of six major Large Language Models (LLMs) to generate and optimize NHOT health education materials.
METHODS: Six LLMs were included: ChatGPT-5.1, Claude 4.5 Sonnet, Gemini 2.5 Pro, Grok-4.1, Qwen-3-Max, and DeepSeek-V3.2. Each model generated 20 texts under each of three prompting strategies: baseline (Prompt A), simplification (Prompt B), and rewriting (Prompt C), yielding 360 texts in total. Twenty WeChat public health articles served as the human-authored baseline. Subjective evaluation employed C-DISCERN, C-PEMAT (understandability and actionability), and a medical accuracy Likert scale, supplemented by objective linguistic analysis using the Alpha Readability Chinese (ARC) tool.
RESULTS: All models demonstrated superior medical accuracy relative to the human baseline (Likert median 1.0 vs. 2.0 for the original articles). Under baseline conditions, Qwen achieved the highest content quality (C-DISCERN median 57.0), while Claude attained perfect actionability scores. The simplification prompt (Prompt B) significantly reduced C-DISCERN scores across all models (all p < 0.001) without meaningfully improving understandability or actionability. In the rewriting task (Prompt C), all models significantly enhanced the understandability of the original texts (p < 0.01), with Grok and Qwen additionally improving content quality and actionability. Linguistic analysis revealed that prompt optimization improved semantic accuracy and reduced semantic noise, but at the cost of decreased lexical richness.
CONCLUSION: LLMs demonstrate significant potential for optimizing existing health education materials, performing more reliably in rewriting mode than in de novo generation. Simplistic "plain language" instructions risk compromising content quality, highlighting the need for carefully designed prompts that balance accuracy, clarity, and completeness. All AI-generated materials require rigorous review by qualified clinical professionals before distribution.