Abstract
OBJECTIVE: To compare the quality and comprehensiveness of information on peripherally inserted central catheters (PICCs) provided by Google and the artificial intelligence (AI) tool ChatGPT, and to identify the implications for patient understanding, informed decision-making, and adherence in oncology care.

METHODS: In a simulated study, the top 20 PICC-related frequently asked questions (FAQs) were identified via a standardized Google search. These questions were posed to both platforms, and the responses were systematically analyzed and compared by source, type, and content.

RESULTS: Google's answers were fragmented and drawn most often from government websites (45%). In contrast, ChatGPT provided comprehensive, synthesized responses derived primarily from academic sources (70%); because ChatGPT does not explicitly attribute its sources, this provenance was inferred from response content. Critically, significant discrepancies were found in key clinical information. For instance, Google's top answer for PICC longevity was "two to six weeks," whereas ChatGPT suggested "up to six months or more," creating a high potential for patient confusion and undermining trust in prescribed care plans.

CONCLUSION: ChatGPT has the potential to offer more integrated health information than traditional search engines, thereby influencing how patients access knowledge. However, the presence of conflicting and decontextualized information introduces significant risks, such as patient confusion and anxiety, which can negatively affect trust, shared decision-making, and adherence to medical advice.