Inaccurate information regarding cardiovascular disease prevention enabled by generative artificial intelligence


Abstract

Inaccurate information regarding cardiovascular disease (CVD) prevention is prevalent on the internet and may influence medical decisions. Generative artificial intelligence (genAI) chatbots are widely accessible and may be used for medical questions. This physician-led experiment evaluated the generation of inaccurate CVD information by two widely used genAI models, OpenAI o1 and DeepSeek-R1. Performed in February 2025, the experiment assessed genAI responses on nine commonly relevant CVD prevention topics, including statin therapy, supplements, and LDL cholesterol. Prompts were devised in two tones: a neutral-tone prompt and an inaccuracy-tone prompt, the latter of which specifically requested inaccurate information. Two board-certified preventive cardiologists graded each response as appropriate, borderline, or inappropriate based on its content and references. For the nine neutral-tone prompts, 88.9% (8/9) of OpenAI o1's responses and 66.7% (6/9) of DeepSeek-R1's were graded as appropriate. For the inaccuracy-tone prompts, OpenAI o1 produced no appropriate responses (0/9), with 22.2% (2/9) graded as borderline and 77.8% (7/9) as inappropriate; all of DeepSeek-R1's responses (9/9) were graded as inappropriate. These findings highlight the relative ease with which genAI models can be prompted to produce inaccurate information on CVD prevention topics that are highly relevant to public health, and underscore the need for further research and policy interventions to mitigate AI-driven informational risks.
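The percentages quoted in the abstract follow directly from the nine-prompt denominator in each condition. As a minimal sketch of that arithmetic (only the counts actually reported in the abstract are used; the labels are illustrative):

```python
def pct(count, total=9):
    """Share of the nine prompts per condition, rounded to one decimal place."""
    return round(100 * count / total, 1)

# Counts reported in the abstract (out of nine prompts per condition).
reported = {
    "OpenAI o1 / neutral tone / appropriate": 8,
    "DeepSeek-R1 / neutral tone / appropriate": 6,
    "OpenAI o1 / inaccuracy tone / appropriate": 0,
    "OpenAI o1 / inaccuracy tone / borderline": 2,
    "OpenAI o1 / inaccuracy tone / inappropriate": 7,
    "DeepSeek-R1 / inaccuracy tone / inappropriate": 9,
}

for label, n in reported.items():
    print(f"{label}: {n}/9 = {pct(n)}%")
```

Running this reproduces the figures given above (8/9 = 88.9%, 6/9 = 66.7%, 2/9 = 22.2%, 7/9 = 77.8%).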
