Using large language models to suggest informative prior distributions in Bayesian regression analysis


Abstract

Selecting prior distributions in Bayesian regression analysis is a challenging task. Even if knowledge already exists, gathering this information and translating it into informative prior distributions is both resource-demanding and difficult to perform objectively. In this paper, we analyze the idea of using large language models (LLMs) to suggest suitable prior distributions. The substantial amount of information absorbed by LLMs gives them the potential to suggest knowledge-based and more objective informative priors. We have developed an extensive prompt that not only asks LLMs to suggest suitable prior distributions based on their knowledge, but also to verify and reflect on their choices. We evaluated three popular LLMs, Claude Opus, Gemini 2.5 Pro, and ChatGPT 4o-mini, on two different real datasets: an analysis of heart disease risk and an analysis of variables affecting the strength of concrete. For all the variables, the LLMs were capable of suggesting the correct direction of the different associations, e.g., that the risk of heart disease is higher for males than females, or that the strength of concrete decreases with the amount of water added. The LLMs suggested both moderately and weakly informative priors, and the moderate priors were in many cases too confident, resulting in prior distributions with little agreement with the data. The quality of the suggested prior distributions was measured by computing the Kullback-Leibler divergence to the distribution of the maximum likelihood estimator (the "data distribution"). In both experiments, Claude and Gemini provided better prior distributions than ChatGPT. For weakly informative priors, ChatGPT and Gemini defaulted to a mean of 0, which was unnecessarily vague given their demonstrated knowledge; Claude, in contrast, did not, a clear advantage of its approach.
The ability of LLMs to suggest the correct direction for different associations demonstrates a great potential for LLMs as an efficient and objective method to develop informative prior distributions. However, a significant challenge remains in calibrating the width of these priors, as the LLMs demonstrated a tendency towards both overconfidence and underconfidence. Our code is available at https://github.com/hugohammer/LLM-priors .
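The evaluation described above compares each suggested prior against the (approximately normal) sampling distribution of the maximum likelihood estimator using the Kullback-Leibler divergence, which has a closed form for two univariate Gaussians. A minimal sketch of this comparison (the function name and example parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def kl_normal(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL(P || Q) for univariate normals P = N(mu_p, sigma_p^2)
    and Q = N(mu_q, sigma_q^2)."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2)
            - 0.5)

# Example: an overconfident prior (narrow, shifted) scores worse than a
# well-calibrated one when compared to a hypothetical "data distribution".
data_mu, data_sd = 1.0, 0.5          # illustrative MLE distribution
overconfident = kl_normal(0.0, 0.1, data_mu, data_sd)
calibrated = kl_normal(1.0, 0.5, data_mu, data_sd)
print(overconfident > calibrated)    # the narrow, shifted prior diverges more
```

A lower divergence indicates a prior that agrees more closely with what the data alone would estimate; this makes "too confident" priors (small standard deviation, shifted mean) directly quantifiable.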
