Paper Title

Evaluating Aleatoric Uncertainty via Conditional Generative Models

Paper Authors

Ziyi Huang, Henry Lam, Haofeng Zhang

Paper Abstract

Aleatoric uncertainty quantification seeks distributional knowledge of random responses, which is important for reliability analysis and robustness improvement in machine learning applications. Previous research on aleatoric uncertainty estimation mainly targets closed-form conditional densities or variances, which requires strong restrictions on the data distribution or dimensionality. To overcome these restrictions, we study conditional generative models for aleatoric uncertainty estimation. We introduce two metrics to measure the discrepancy between two conditional distributions that suit these models. Both metrics can be easily and unbiasedly computed via Monte Carlo simulation of the conditional generative models, thus facilitating their evaluation and training. We demonstrate numerically how our metrics provide correct measurements of conditional distributional discrepancies and can be used to train conditional models competitive against existing benchmarks.
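
The abstract emphasizes that both discrepancy metrics can be computed unbiasedly via Monte Carlo simulation of the conditional generative models. The sketch below is only an illustrative stand-in for that general recipe, not the paper's actual metrics: it averages an unbiased kernel two-sample (MMD-style) estimate over a set of covariates, and all names (`gaussian_kernel`, `mmd2_unbiased`, `conditional_discrepancy`) and the toy generators are hypothetical.

```python
# Hypothetical sketch: Monte Carlo estimation of a discrepancy between two
# conditional generative models (an MMD-style stand-in, not the paper's metrics).
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """RBF kernel matrix between the rows of a (m, d) and b (n, d)."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2_unbiased(y1, y2, bandwidth=1.0):
    """Unbiased (U-statistic) estimate of squared MMD between two samples."""
    m, n = len(y1), len(y2)
    k11 = gaussian_kernel(y1, y1, bandwidth)
    k22 = gaussian_kernel(y2, y2, bandwidth)
    k12 = gaussian_kernel(y1, y2, bandwidth)
    term11 = (k11.sum() - np.trace(k11)) / (m * (m - 1))
    term22 = (k22.sum() - np.trace(k22)) / (n * (n - 1))
    return term11 + term22 - 2.0 * k12.mean()

def conditional_discrepancy(gen_a, gen_b, xs, n_samples=128, seed=0):
    """Average MMD^2 between two conditional laws, Monte Carlo'd over covariates xs.

    gen_a, gen_b: callables (x, n, rng) -> (n, d) array of sampled responses y | x.
    """
    rng = np.random.default_rng(seed)
    vals = [mmd2_unbiased(gen_a(x, n_samples, rng), gen_b(x, n_samples, rng))
            for x in xs]
    return float(np.mean(vals))

# Toy usage: two Gaussian conditional models of y given x with different noise levels.
gen_true = lambda x, n, rng: rng.normal(np.sin(x), 0.1, size=(n, 1))
gen_fit = lambda x, n, rng: rng.normal(np.sin(x), 0.3, size=(n, 1))
print(conditional_discrepancy(gen_true, gen_fit, xs=np.linspace(0.0, 3.0, 10)))
```

Because the per-covariate two-sample estimate is unbiased and the outer step is a plain average over sampled covariates, the overall estimate stays unbiased for the corresponding population quantity, which mirrors the property the abstract highlights for the paper's metrics.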
