Abstract
INTRODUCTION: Accurate and up-to-date educational resources are vital for medical professionals treating pneumonia: they ensure alignment with evolving clinical guidelines, improve diagnostic precision, and support effective, evidence-based care that enhances patient outcomes.

METHODS: The readability of content on pneumonia from ChatGPT (OpenAI, San Francisco, CA, USA) and UpToDate was assessed and compared using the Flesch-Kincaid Reading Ease and Grade Level metrics via an online calculator, evaluating parameters such as word and sentence counts, average sentence length, and the proportion of difficult words. Data were compiled in Excel (Microsoft Corp., Redmond, WA, USA) and analyzed statistically, with significance set at p < 0.05.

RESULTS: In this limited-scope study, ChatGPT-generated content on pneumonia was significantly shorter and denser than that of UpToDate and used more complex vocabulary, though the two sources showed comparable readability scores across the standard metrics.

CONCLUSION: These findings suggest that ChatGPT may offer quicker, more accessible summaries, while UpToDate provides more balanced, clinically grounded content, highlighting the potential of a combined approach for effective medical education. Because the clinical accuracy of the AI-generated content was not reviewed by human experts, broader studies across diverse clinical topics and with multiple reviewers are needed.
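The two readability metrics named in Methods follow standard published formulas. As a minimal illustrative sketch (not the online calculator actually used in the study, whose exact syllable-counting rules are unspecified; the vowel-group heuristic below is an assumption), they can be computed as:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels,
    # discounting a trailing silent "e". An assumption, not the
    # study's tool; real syllabification is more involved.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid(text: str) -> tuple[float, float]:
    # Returns (Reading Ease, Grade Level) using the standard formulas.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average sentence length
    spw = syllables / len(words)        # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Shorter sentences and fewer syllables per word raise the Reading Ease score and lower the Grade Level, which is why sentence length and vocabulary complexity are the parameters the study tabulates.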