Abstract
Large Language Models (LLMs) frequently generate patient education materials (PEMs) that exceed recommended reading levels. Prompting LLMs to produce PEMs at a 5th-grade reading level consistently yielded statistically significantly lower readability scores than unprompted outputs. These findings suggest that simple prompt engineering can improve the clarity and accessibility of LLM-generated PEMs.