Evaluating the o1 reasoning large language model for cognitive bias: a vignette study


Abstract

BACKGROUND: Cognitive biases, systematic deviations from logical judgment, are well documented in clinical decision-making, particularly in settings characterized by high decision load, limited time, and diagnostic uncertainty, such as critical care. Prior work demonstrated that large language models, particularly GPT-4, reproduce many of these biases, sometimes to a greater extent than human clinicians.

METHODS: We tested whether the o1 model (o1-2024-12-17), a newly released AI system with enhanced reasoning capabilities, is susceptible to cognitive biases that commonly affect medical decision-making. Following the methodology established by Wang and Redelmeier [15], we used ten pairs of clinical scenarios, each designed to test a specific cognitive bias known to influence clinicians. Each scenario had two versions that differed only in subtle modifications designed to trigger the bias (such as presenting mortality rates versus survival rates). The o1 model generated 90 independent clinical recommendations for each scenario version, totalling 1,800 responses. We measured cognitive bias as systematic differences in recommendation rates between the paired scenarios, which should not occur with unbiased reasoning. The o1 model's performance was compared against previously published results from both the GPT-4 model and historical human clinician studies.

RESULTS: The o1 model showed no measurable cognitive bias in seven of the ten vignettes. In two vignettes, the o1 model showed significant bias, but its absolute magnitude was lower than values previously reported for GPT-4 and human clinicians. In a single vignette, Occam's razor, the o1 model exhibited consistent bias. Therefore, although bias appeared less frequent overall with the reasoning model than with GPT-4, it was worse in one vignette. The model was more prone to bias in vignettes that included a gap-closing cue that seemingly resolved the clinical uncertainty. Across eight vignette versions, intra-scenario agreement exceeded 94%, indicating lower decision variability than previously described with GPT-4 and human clinicians.

CONCLUSION: Reasoning models may reduce cognitive bias and random variation in judgment (i.e., "noise"). However, our findings caution that reasoning models are still not entirely immune to cognitive bias. These findings suggest that reasoning models may impart some benefits as decision-support tools in medicine, but they also imply a need to explore further the circumstances in which these tools may fail.
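The bias measure described in the methods, a systematic difference in recommendation rates between paired scenario versions, can be sketched as a two-proportion comparison. The snippet below is a minimal illustration using hypothetical counts (80/90 vs. 60/90), not the paper's actual data or necessarily its exact statistical procedure.

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test.

    x1/n1 and x2/n2 are recommendation counts for the two versions of a
    paired vignette; under unbiased reasoning the underlying rates are equal.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 80/90 recommendations under a survival framing
# versus 60/90 under a mortality framing of the same vignette.
z, p = two_proportion_z(80, 90, 60, 90)
```

With these illustrative counts the framing difference is large relative to sampling noise, so the test rejects the hypothesis of equal recommendation rates; identical counts in both versions would yield z = 0 and no evidence of bias.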
