Abstract
BACKGROUND: Colorectal cancer (CRC) represents a substantial global burden, particularly in China. Patients have limited awareness of the importance of postoperative follow-up examinations and insufficient knowledge about them, which ultimately leads to poor long-term prognosis. The advent of artificial intelligence tools such as the Chat Generative Pre-trained Transformer (ChatGPT) in the healthcare sector is poised to transform patient management strategies.

OBJECTIVE: This study aimed to evaluate the effectiveness of ChatGPT in responding to patients' inquiries regarding postoperative follow-up of CRC, with the overarching goal of enhancing patients' awareness of, and compliance with, postoperative follow-up examinations.

METHODS: A set of 10 questions concerning postoperative review of CRC was posed to ChatGPT-4.5. The responses were evaluated by five anorectal specialists in Zhejiang Province in three domains (accuracy, completeness, and comprehensibility) and by 100 inpatients in three domains (completeness, comprehensibility, and trustworthiness).

RESULTS: The accuracy scale (scored from 1 to 6) received a mean score of 4.6 ± 0.7. The completeness and comprehensibility scales (each scored from 1 to 3) received mean scores of 2.2 ± 0.5 and 2.5 ± 0.5, respectively. Cronbach's α analyses indicated good reliability for the accuracy and completeness scales (α = 0.85) and excellent reliability for the comprehensibility scale (α = 0.93), although the latter also suggested possible item redundancy. Patient feedback was positive, with 98% to 100% of patients rating the responses to all questions as complete, comprehensible, and trustworthy.

CONCLUSION: ChatGPT demonstrated the capacity to generate satisfactory responses to inquiries concerning CRC postoperative review. Furthermore, it has the potential to enhance patient awareness and knowledge, which may consequently improve long-term outcomes.