Demographic biases in AI-generated simulated patient cohorts: a comparative analysis against census benchmarks


Abstract

BACKGROUND: Generative artificial intelligence models are being introduced as low-cost tools for creating simulated patient cohorts in undergraduate medical education. Their educational value, however, depends on the extent to which the synthetic populations mirror real-world demographic diversity. We therefore assessed whether two commonly deployed large language models produce patient profiles that reflect the current age, sex, and ethnic composition of the UK.

METHODS: GPT-3.5-turbo-0125 and GPT-4-mini-2024-07-18 were each prompted, without demographic steering, to generate 250 UK-based 'patients'. Age was returned directly by the model; sex and ethnicity were inferred from given and family names using a validated census-derived classifier. Observed frequencies for each demographic variable were compared with England and Wales 2021 census expectations using chi-square goodness-of-fit tests.

RESULTS: Both cohorts diverged significantly from census benchmarks (p < 0.0001 for every variable). Age distributions lacked both very young and older individuals, with certain middle-aged groups overrepresented (GPT-3.5: χ²(17) = 1310.4, p < 0.0001; GPT-4-mini: χ²(17) = 1866.1, p < 0.0001). Neither model produced patients younger than 25 years; GPT-3.5 generated no one older than 47 years, and GPT-4-mini no one older than 56 years. Sex proportions also differed markedly, skewing heavily toward males (GPT-3.5: χ²(1) = 23.84, p < 0.0001; GPT-4-mini: χ²(1) = 191.7, p < 0.0001): male patients constituted 64.7% and 92.8% of the two cohorts, respectively. Name diversity was limited: GPT-3.5 yielded 104 unique first-name/family-name combinations, whereas GPT-4-mini produced only nine. Ethnic profiles were similarly imbalanced, with some groups overrepresented and others entirely absent (χ²(10) = 42.19, p < 0.0001).

CONCLUSIONS: In their default state, the evaluated models create synthetic patient pools that largely exclude younger, older, and female patients, as well as most minority ethnic groups. Such demographically narrow outputs threaten to normalise biased clinical expectations and may undermine efforts to prepare students for equitable practice. Baseline auditing of model behaviour is therefore essential, providing a benchmark against which prompt-engineering or data-curation strategies can be evaluated before generative systems are integrated into formal curricula.
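The chi-square goodness-of-fit comparison described in the methods can be sketched as follows. This is a minimal illustration only: the observed counts and census proportions below are hypothetical placeholders, not the study's actual data, and the study may have used different tooling.

```python
# Minimal sketch of a chi-square goodness-of-fit test against census
# proportions (pure Python; scipy.stats.chisquare performs the same test
# and also returns a p-value). All numbers below are illustrative.

def chi_square_gof(observed, expected_props):
    """Return (chi-square statistic, degrees of freedom)."""
    total = sum(observed)
    expected = [p * total for p in expected_props]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, len(observed) - 1

# Hypothetical example: 162 male / 88 female in a 250-patient cohort,
# compared against an assumed 49% male / 51% female census benchmark.
observed = [162, 88]
census_props = [0.49, 0.51]
stat, df = chi_square_gof(observed, census_props)
print(f"chi2({df}) = {stat:.2f}")  # large values indicate divergence
```

The same function applies unchanged to the age and ethnicity comparisons: with 18 age bands the degrees of freedom become 17, matching the χ²(17) statistics reported above.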
