Optimal Information Collection Policies in a Markov Decision Process Framework


Abstract

BACKGROUND: The cost-effectiveness and the value of additional information about a health technology or program may change over time because of trends affecting patient cohorts and/or the intervention itself. Even for parameters that do not change over time, delaying information collection may be optimal.

METHODS: We present a stochastic dynamic programming approach that simultaneously identifies the optimal intervention and information collection policies. We apply the framework to birth-cohort hepatitis C virus (HCV) screening, focusing on how a time-varying parameter (HCV prevalence) affects the optimal information collection policy for a parameter assumed constant across birth cohorts: the liver fibrosis stage distribution at screen-detected diagnosis at age 50.

RESULTS: We prove that it may be optimal to delay information collection until a time when the information more immediately affects decision making. For the HCV screening example, given initial beliefs, the optimal policy (in 2010) was to continue screening and to collect information about the distribution of liver fibrosis at screen-detected diagnosis in 12 years, increasing the expected incremental net monetary benefit (INMB) by $169.5 million relative to current guidelines.

CONCLUSIONS: The option to delay information collection until the information is sufficiently likely to influence decisions can increase efficiency. A dynamic programming framework enables an assessment of the marginal value of information and determines the optimal policy, including when and how much information to collect.
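The delay logic described in the abstract can be sketched with a toy finite-horizon dynamic program (our own illustration, not the paper's model): a binary parameter theta with a two-point prior, per-period screening benefits that decline over time as a stand-in for falling prevalence across cohorts, and a one-time option to pay for perfect information about theta. The Bellman recursion compares "act on the current belief and stay uninformed" against "pay the collection cost now and continue with the informed value"; with discounting, collection is optimally deferred until the period where the informed and uninformed actions would first diverge. All parameter values and function names below are illustrative assumptions.

```python
# Toy value-of-information dynamic program (illustrative only, not the
# paper's HCV model): decide each period whether to screen and whether
# to pay once for perfect information about a binary parameter theta.

T = 10          # planning horizon: periods 0..T-1
BETA = 0.97     # per-period discount factor
P_GOOD = 0.5    # prior belief that theta = "good"
COST = 5.0      # one-time cost of (perfectly revealing) information

def reward(theta, t):
    """Per-period net benefit of screening; declines over time,
    a stand-in for falling prevalence in later cohorts."""
    return (10.0 - t) if theta == "good" else (2.0 - t)

def solve():
    # Value functions indexed by t = 0..T (terminal values are 0).
    v_good = [0.0] * (T + 1)   # theta known to be "good"
    v_bad = [0.0] * (T + 1)    # theta known to be "bad"
    v_unf = [0.0] * (T + 1)    # still uninformed (belief P_GOOD)
    collect = [False] * T      # optimal "collect now?" choice when uninformed
    for t in range(T - 1, -1, -1):
        # Informed states: screen only if this period's reward is positive.
        v_good[t] = max(reward("good", t), 0.0) + BETA * v_good[t + 1]
        v_bad[t] = max(reward("bad", t), 0.0) + BETA * v_bad[t + 1]
        # Uninformed: either act on the expected reward and wait, or pay
        # COST now, learn theta, and continue with the informed value.
        exp_r = P_GOOD * reward("good", t) + (1 - P_GOOD) * reward("bad", t)
        wait = max(exp_r, 0.0) + BETA * v_unf[t + 1]
        buy = -COST + P_GOOD * v_good[t] + (1 - P_GOOD) * v_bad[t]
        collect[t] = buy > wait
        v_unf[t] = max(wait, buy)
    return v_unf, collect

v_unf, collect = solve()
# First period on the optimal path at which information is collected.
t_star = next(t for t in range(T) if collect[t])
print("optimal collection time:", t_star)
```

In this instance the optimal policy waits several periods before collecting: early on, screening is worthwhile under either value of theta, so the information would not yet change any decision, and paying for it later costs less in present value. This mirrors the abstract's finding that delaying collection until the information more immediately affects decision making can be optimal.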
