Paper Title

Tail-adaptive Bayesian shrinkage

Authors

Se Yoon Lee, Peng Zhao, Debdeep Pati, Bani K. Mallick

Abstract


Robust Bayesian methods for high-dimensional regression problems under diverse sparse regimes are studied. Traditional shrinkage priors are primarily designed to detect a handful of signals from tens of thousands of predictors in the so-called ultra-sparsity domain. However, they may not perform desirably when the degree of sparsity is moderate. In this paper, we propose a robust sparse estimation method under diverse sparsity regimes, which has a tail-adaptive shrinkage property. In this property, the tail-heaviness of the prior adjusts adaptively, becoming larger or smaller as the sparsity level increases or decreases, respectively, to accommodate more or fewer signals, a posteriori. We propose a global-local-tail (GLT) Gaussian mixture distribution that ensures this property. We examine the role of the tail-index of the prior in relation to the underlying sparsity level and demonstrate that the GLT posterior contracts at the minimax optimal rate for sparse normal mean models. We apply both the GLT prior and the Horseshoe prior to a real data problem and simulation examples. Our findings indicate that the varying tail rule based on the GLT prior offers advantages over a fixed tail rule based on the Horseshoe prior in diverse sparsity regimes.
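The abstract does not give the GLT density itself, but the idea of global-local shrinkage with a tunable tail can be illustrated with a simple simulation. The sketch below is a hypothetical illustration, not the paper's method: it draws coefficients from a generic global-local hierarchy beta_j = tau * lambda_j * z_j, comparing half-Cauchy local scales (the Horseshoe's fixed polynomial tail) against a Pareto local scale whose shape parameter `alpha` plays the role of an adjustable tail index, with smaller `alpha` giving heavier tails.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_local_draws(n, tau, local_scale_sampler):
    # Generic global-local shrinkage hierarchy: beta_j = tau * lambda_j * z_j,
    # with global scale tau and local scales lambda_j.
    lam = local_scale_sampler(n)
    z = rng.standard_normal(n)
    return tau * lam * z

# Horseshoe-style local scales: half-Cauchy (fixed tail index 1).
horseshoe = lambda n: np.abs(rng.standard_cauchy(n))

# Hypothetical tail-adaptive local scale: classical Pareto with tail index
# `alpha`; smaller alpha means heavier tails, accommodating more large signals.
def pareto_scale(alpha):
    # numpy's pareto samples a Lomax; adding 1 gives a classical Pareto >= 1.
    return lambda n: rng.pareto(alpha, n) + 1.0

beta_hs = global_local_draws(10_000, tau=0.1, local_scale_sampler=horseshoe)
beta_glt = global_local_draws(10_000, tau=0.1, local_scale_sampler=pareto_scale(0.5))

# Heavier-tailed local scales produce far more extreme draws in the upper tail.
print(np.quantile(np.abs(beta_hs), 0.99), np.quantile(np.abs(beta_glt), 0.99))
```

In a full tail-adaptive scheme, the tail index would be learned from the data rather than fixed, so that the posterior can shift toward heavier or lighter tails as the sparsity level decreases or increases.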
