Scalable Empirical Bayes Inference and Bayesian Sensitivity Analysis.

Authors: Hani Doss, Antonio Linero
Consider a Bayesian setup in which we observe Y, whose distribution depends on a parameter θ, that is, Y ∣ θ ~ π_{Y∣θ}. The parameter θ is unknown and treated as random, and a prior distribution chosen from some parametric family {ν_h(⋅), h ∈ ℋ} is to be placed on it. For the subjective Bayesian there is a single prior in the family which represents his or her beliefs about θ, but determination of this prior is very often extremely difficult. In the empirical Bayes approach, the latent distribution on θ is estimated from the data. This is usually done by choosing the value of the hyperparameter h that maximizes some criterion. Arguably the most common way of doing this is to let m(h) be the marginal likelihood of h, that is, m(h) = ∫ π_{Y∣θ} ν_h(θ) dθ, and choose the value of h that maximizes m(⋅). Unfortunately, except for a handful of textbook examples, analytic evaluation of argmax_h m(h) is not feasible. The purpose of this paper is two-fold. First, we review the literature on estimating it and find that the most commonly used procedures are either potentially highly inaccurate or do not scale well with the dimension of h, the dimension of θ, or both. Second, we present a method for estimating argmax_h m(h), based on Markov chain Monte Carlo, that applies very generally and scales well with dimension. Let g be a real-valued function of θ, and let I(h) be the posterior expectation of g(θ) when the prior is ν_h. As a byproduct of our approach, we show how to obtain point estimates and globally-valid confidence bands for the family I(h), h ∈ ℋ. To illustrate the scope of our methodology we provide three detailed examples, having different characters.
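
To make the setup concrete, the sketch below works through the marginal-likelihood-maximization idea on a toy conjugate normal-normal model. It is not the estimator proposed in the paper; it only illustrates the standard identity m(h)/m(h₁) = E[ν_h(θ)/ν_{h₁}(θ)], where the expectation is taken under the posterior corresponding to a fixed baseline hyperparameter h₁, so that a single set of posterior draws can serve every h. All model choices (normal likelihood with known σ, normal prior with mean h and known τ, the grid of h values, the function g(θ) = θ) are illustrative assumptions.

```python
# A minimal sketch (not the paper's algorithm): empirical Bayes by maximizing
# the marginal likelihood m(h) over a grid of h, using the identity
#     m(h) / m(h1) = E_{posterior under h1}[ nu_h(theta) / nu_h1(theta) ],
# so draws from a single baseline posterior are reused for every h.
# Toy conjugate model (all names and values are illustrative assumptions):
#     Y_1, ..., Y_n | theta ~ N(theta, sigma^2),   prior nu_h: theta ~ N(h, tau^2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated data; sigma and tau are fixed and known in this toy example.
sigma, tau, n = 1.0, 2.0, 50
theta_true = 1.3
y = rng.normal(theta_true, sigma, size=n)
ybar = y.mean()

# Baseline hyperparameter h1. In this conjugate model the posterior is normal,
# so we draw from it directly; in general these would be MCMC samples.
h1 = 0.0
post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
post_mean = post_var * (n * ybar / sigma**2 + h1 / tau**2)
theta_draws = rng.normal(post_mean, np.sqrt(post_var), size=20_000)

def prior_ratio(h):
    """Weights nu_h(theta_i) / nu_h1(theta_i) at the baseline posterior draws."""
    return norm.pdf(theta_draws, loc=h, scale=tau) / norm.pdf(theta_draws, loc=h1, scale=tau)

h_grid = np.linspace(-3.0, 4.0, 141)
B = np.array([prior_ratio(h).mean() for h in h_grid])   # estimates of m(h) / m(h1)
h_hat = h_grid[np.argmax(B)]                            # empirical Bayes estimate of h

# Sensitivity analysis: posterior mean of g(theta) = theta as h varies, via
# self-normalized importance weights based on the same baseline draws.
I_hat = np.array([(prior_ratio(h) * theta_draws).mean() / prior_ratio(h).mean()
                  for h in h_grid])

# Closed-form checks available in this toy model:
# argmax_h m(h) = ybar, and E[theta | y, h] = post_var * (n*ybar/sigma^2 + h/tau^2).
print(f"h_hat = {h_hat:.3f}  (closed form: {ybar:.3f})")
print(f"I(h_hat) = {I_hat[np.argmax(B)]:.3f}")
```

In the conjugate toy model both argmax_h m(h) and I(h) are available in closed form, which gives a sanity check on the Monte Carlo estimates; in realistic models the θ draws would come from an MCMC sampler targeting the baseline posterior, and quantifying the Monte Carlo error of the resulting curves is exactly the kind of question the paper addresses.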
