Public Perceptions of AI in Medicine and Implications for Future Medical Education: Cross-Sectional Survey


Abstract

BACKGROUND: The integration of artificial intelligence (AI) into clinical practice is contingent on public trust. This trust often depends on physician oversight, yet a significant gap exists between the need for AI-competent physicians and the current state of medical education. While the perspectives of students and experts on this gap are known, the views of the US general public remain largely unquantified.

OBJECTIVE: This study aimed to assess US public perceptions of AI in medicine and the corresponding emergent needs for medical education. We specifically sought to quantify public trust in different diagnostic scenarios, concerns about physician overreliance on AI, support for mandatory AI education, and priorities for the future focus of medical training.

METHODS: We conducted a cross-sectional, web-based survey of adults in the United States in November 2025. Participants (N=524) were recruited via SurveyMonkey Audience. We calculated descriptive statistics (frequencies, proportions, and 95% CIs) for all main survey items.

RESULTS: A total of 524 participants completed the survey. Most (n=329, 62.8%; 95% CI 58.6%-66.9%) placed the most trust in a physician's diagnosis based on expertise alone; only 7.8% (n=41; 95% CI 5.5%-10.1%) trusted an AI-first diagnostic model. Trust was highly contingent on training: 93.9% (n=492) of participants rated formal physician training on AI limitations as "essential" or "very important." Concern about physician overreliance on AI was widespread, with 81.1% (n=425) reporting being "very concerned" or "extremely concerned." Consequently, 85.1% (n=446) agreed or strongly agreed that training on AI use, ethics, and limitations should be mandatory in medical school. When asked about future educational priorities, 70.2% (n=368; 95% CI 66.3%-74.1%) believed that medical education should prioritize human-centered skills (eg, empathy and communication) over clinical skills.
CONCLUSIONS: The US public expressed conditional trust in medical AI, strongly preferring physician-led and critically supervised models. These findings reveal a clear public mandate for medical education reform. The public expects future physicians to be mandatorily trained to appraise AI, understand its limitations, and refocus their professional development on the human-centered skills that technology cannot replace.
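The proportions and 95% CIs reported above are consistent with a standard Wald interval for a binomial proportion. The minimal sketch below (plain Python; the function name `wald_ci` is illustrative, not from the study) reproduces two of the reported trust items from their raw counts.

```python
from math import sqrt

def wald_ci(n_yes: int, n_total: int, z: float = 1.96):
    """Sample proportion with a Wald 95% CI, as percentages to one decimal.

    CI = p +/- z * sqrt(p * (1 - p) / n), with z = 1.96 for 95% coverage.
    """
    p = n_yes / n_total
    se = sqrt(p * (1 - p) / n_total)
    return (round(p * 100, 1),
            round((p - z * se) * 100, 1),
            round((p + z * se) * 100, 1))

# Trust in physician expertise alone: 329 of 524 respondents
print(wald_ci(329, 524))  # (62.8, 58.6, 66.9)

# Trust in an AI-first diagnostic model: 41 of 524 respondents
print(wald_ci(41, 524))   # (7.8, 5.5, 10.1)
```

Both outputs match the intervals quoted in the RESULTS section, suggesting the authors used simple Wald intervals rather than, say, Wilson or exact binomial intervals, which would differ slightly at these sample sizes.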
