Decoding trust in large language models for healthcare in Saudi Arabia

Abstract

This study investigates the factors that influence user trust and decision-making when using Artificial Intelligence (AI) systems, focusing specifically on ChatGPT in the healthcare domain within the Saudi context. As AI-powered conversational agents are increasingly used for medical advice, symptom assessment, and healthcare decision support, understanding user trust and adoption behavior is critical. Drawing on constructs from trust in technology, the Technology Acceptance Model (TAM), the Health Belief Model (HBM), and usability frameworks, the study applies Partial Least Squares Structural Equation Modeling (PLS-SEM) to analyze the relationships among competence, reliability, transparency, security, trustworthiness, persuasiveness, and user satisfaction. The findings highlight the significant roles of reliability, security, and transparency in building trust and supporting decision-making with ChatGPT in healthcare applications. Notably, 10 of the 15 tested hypotheses were supported, reinforcing the critical importance of trust and satisfaction in the adoption of AI for health-related interactions. The research contributes to understanding cultural influences on AI adoption in Saudi Arabia's healthcare sector and offers practical recommendations for enhancing the trustworthiness and effectiveness of large language models (LLMs) such as ChatGPT in medical consultations. These insights are vital for developing responsible AI practices and ensuring the ethical deployment of AI-powered tools in healthcare settings, ultimately fostering user confidence in AI-assisted medical decision-making.
