Abstract
Background: The rapid integration of artificial intelligence (AI) technologies into healthcare systems presents new opportunities and challenges, particularly regarding legal and ethical implications. In Saudi Arabia, limited legal awareness among clinicians could hinder the safe implementation of AI tools.

Methods: A sequential explanatory mixed-methods design was employed. In Phase One, a structured electronic survey was administered to 357 clinicians across public and private healthcare institutions in Saudi Arabia, assessing legal awareness, liability concerns, data privacy, and trust in AI. In Phase Two, a qualitative expert panel involving health law specialists, digital health advisors, and clinicians was conducted to interpret the survey findings and identify key regulatory needs.

Results: Only 7% of clinicians reported high familiarity with the legal implications of AI, and 89% had received no formal legal training. Confidence in AI compliance with data laws was low (mean score: 1.40/3). Statistically significant associations were found between professional role and legal familiarity (χ²(2) = 18.6, p < 0.01), and between legal training and confidence in AI compliance (t ≈ 6.1, p < 0.001). Qualitative findings highlighted six core legal barriers, including lack of training, unclear liability, and gaps in regulatory alignment with national laws such as the Personal Data Protection Law (PDPL).

Conclusions: The study highlights a major gap in legal readiness among Saudi clinicians, which affects patient safety, liability, and trust in AI. Although clinicians are open to using AI, unclear regulations pose barriers to safe adoption. Experts call for national legal standards, mandatory training, and informed consent protocols. A clear legal framework and clinician education are crucial for the ethical and effective use of AI in healthcare.