Paper Title

Unbiased Estimation of the Gradient of the Log-Likelihood in Inverse Problems

Paper Authors

Ajay Jasra, Kody J. H. Law, Deng Lu

Paper Abstract

We consider the problem of estimating a parameter associated to a Bayesian inverse problem. Treating the unknown initial condition as a nuisance parameter, typically one must resort to a numerical approximation of gradient of the log-likelihood and also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e. the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation in terms of the original stochastic model of interest, but can be used in stochastic gradient algorithms which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm provides the possibility for parallel computation on arbitrarily many processors without any loss of efficiency, asymptotically. In practice, this means any precision can be achieved in a fixed, finite constant time, provided that enough processors are available.
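The "no discretization bias" property described above is typically obtained by randomizing over discretization levels so that the expectation telescopes to the limit quantity. Below is a minimal, self-contained sketch of that single-term debiasing idea (in the spirit of randomized multilevel Monte Carlo), applied to a toy quantity with a known O(2^-l) discretization bias. The toy integrand, the function names, the level distribution, and the truncation level are all illustrative assumptions; this is not the paper's actual estimator for inverse problems.

```python
import random

def riemann(l):
    """Left Riemann sum of x^2 on [0, 1] with 2^l cells.

    Plays the role of a level-l discretized quantity: its bias
    relative to the true integral 1/3 is O(2^-l).
    """
    n = 2 ** l
    return sum((i / n) ** 2 for i in range(n)) / n

def single_term_estimate(lmax=20, r=1.5):
    """One sample of a single-term debiased estimator.

    Draw a random level L with P(L = l) proportional to 2^(-r*l),
    then return the level-l increment divided by its probability.
    The expectation telescopes to riemann(lmax), whose bias is
    negligible (truncation at lmax is a practical safeguard only).
    Choosing r between 1 and 2 keeps both the variance and the
    expected cost finite for this toy problem.
    """
    weights = [2 ** (-r * l) for l in range(lmax + 1)]
    z = sum(weights)
    p = [w / z for w in weights]

    # Sample the level by inverting the cumulative distribution.
    u, L, c = random.random(), 0, p[0]
    while u > c and L < lmax:
        L += 1
        c += p[L]

    # Increment between consecutive levels (level -1 is defined as 0).
    delta = riemann(L) - (riemann(L - 1) if L > 0 else 0.0)
    return delta / p[L]

random.seed(0)
n = 20_000
est = sum(single_term_estimate() for _ in range(n)) / n
print(f"estimate: {est:.4f}")  # close to 1/3, with no level-l bias
```

Averaging many independent samples gives a consistent, discretization-bias-free estimate even though each sample touches only one (random) discretization level; since the samples are i.i.d., the averaging can be spread over arbitrarily many processors, which is the parallelization property the abstract highlights.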
