Abstract
This study investigates the factors that shape user trust and decision-making when interacting with Artificial Intelligence (AI) systems, focusing on ChatGPT in the healthcare domain within the Saudi context. As AI-powered conversational agents are increasingly used for medical advice, symptom assessment, and healthcare decision support, understanding user trust and adoption behavior is critical. Drawing on constructs from trust in technology, the Technology Acceptance Model (TAM), the Health Belief Model (HBM), and usability frameworks, the study applies Partial Least Squares Structural Equation Modeling (PLS-SEM) to analyze the relationships among competence, reliability, transparency, security, trustworthiness, persuasiveness, and user satisfaction. The findings highlight the significant roles of reliability, security, and transparency in building trust and supporting decision-making with ChatGPT in healthcare applications. Notably, 10 of the 15 tested hypotheses were supported, underscoring the critical importance of trust and satisfaction in AI adoption for health-related interactions. The research advances understanding of cultural influences on AI adoption in Saudi Arabia's healthcare sector and offers practical recommendations for enhancing the trustworthiness and effectiveness of large language models (LLMs) such as ChatGPT in medical consultations. These insights support the development of responsible AI practices and the ethical deployment of AI-powered tools in healthcare settings, ultimately fostering user confidence in AI-assisted medical decision-making.