Paper Title
Quantile Regression Neural Networks: A Bayesian Approach
Paper Authors
Paper Abstract
This article introduces a Bayesian neural network estimation method for quantile regression, assuming an asymmetric Laplace distribution (ALD) for the response variable. It is shown that the posterior distribution for feedforward neural network quantile regression is asymptotically consistent under a misspecified ALD model. The consistency proof embeds the problem in the density estimation domain and uses bounds on the bracketing entropy to derive posterior consistency over Hellinger neighborhoods. This consistency result holds in the setting where the number of hidden nodes grows with the sample size. The Bayesian implementation utilizes the normal-exponential mixture representation of the ALD density. The algorithm uses Markov chain Monte Carlo (MCMC) simulation techniques: Gibbs sampling coupled with the Metropolis-Hastings algorithm. We address the complexity issues associated with this MCMC implementation in the context of chain convergence, choice of starting values, and step sizes. We illustrate the proposed method with simulation studies and real data examples.
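The normal-exponential mixture representation mentioned in the abstract can be sketched as follows. This is an illustrative sketch, not code from the paper: the function name `sample_ald` and the standard parameterization θ = (1 − 2τ)/(τ(1 − τ)), ψ² = 2/(τ(1 − τ)) are our choices, following the commonly used mixture form in which an ALD draw is a normal draw whose mean and variance are driven by an exponential latent variable. It is this latent-variable form that makes Gibbs sampling tractable in the Bayesian implementation.

```python
import numpy as np

def sample_ald(mu, sigma, tau, size, rng):
    """Draw from ALD(mu, sigma, tau) via its normal-exponential mixture:
    y = mu + sigma * (theta * z + psi * sqrt(z) * u),
    where z ~ Exp(1) is the latent mixing variable and u ~ N(0, 1)."""
    theta = (1.0 - 2.0 * tau) / (tau * (1.0 - tau))
    psi = np.sqrt(2.0 / (tau * (1.0 - tau)))
    z = rng.exponential(scale=1.0, size=size)  # latent exponential variable
    u = rng.standard_normal(size)              # standard normal component
    return mu + sigma * (theta * z + psi * np.sqrt(z) * u)

rng = np.random.default_rng(0)
tau = 0.25
y = sample_ald(mu=1.0, sigma=1.0, tau=tau, size=200_000, rng=rng)
# The tau-quantile of ALD(mu, sigma, tau) equals mu, so the empirical
# tau-quantile of the draws should be close to 1.0.
print(np.quantile(y, tau))
```

Conditionally on the latent z, the response is Gaussian, which is why the quantile-regression posterior admits (partially) conjugate Gibbs updates, with Metropolis-Hastings steps handling the non-conjugate neural network weights.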