Paper Title
Dropout Strikes Back: Improved Uncertainty Estimation via Diversity Sampling
Paper Authors
Paper Abstract
Uncertainty estimation for machine learning models is of high importance in many scenarios, such as constructing confidence intervals for model predictions and detecting out-of-distribution or adversarially generated points. In this work, we show that modifying the sampling distributions of dropout layers in neural networks improves the quality of uncertainty estimation. Our approach consists of two steps: computing data-driven correlations between neurons, and generating samples that include maximally diverse neurons. In a series of experiments on simulated and real-world data, we demonstrate that diversification via determinantal point process (DPP) based sampling achieves state-of-the-art results in uncertainty estimation for regression and classification tasks. An important feature of our approach is that it requires no modification to the model or training procedure, allowing straightforward application to any deep learning model with dropout layers.
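The two steps described above — estimating data-driven correlations between neurons and then sampling a maximally diverse subset — can be sketched as follows. This is an illustrative simplification, not the paper's exact method: it builds a similarity kernel from activation correlations and uses a simple greedy diversity heuristic as a stand-in for true DPP sampling; the function name and `keep` parameter are hypothetical.

```python
import numpy as np

def diverse_dropout_mask(activations, keep=4, seed=0):
    """Sketch: choose a diverse subset of neurons to KEEP, guided by
    data-driven correlations (greedy stand-in for DPP sampling)."""
    rng = np.random.default_rng(seed)
    # Step 1: data-driven similarity kernel — absolute correlation
    # between neurons, estimated from a batch of activations
    # (rows = samples, columns = neurons).
    L = np.abs(np.corrcoef(activations, rowvar=False))
    n = L.shape[0]
    selected = [int(rng.integers(n))]  # start from a random neuron
    while len(selected) < keep:
        # Step 2: greedily add the neuron least similar to those
        # already selected, encouraging a maximally diverse subset.
        sim_to_sel = L[:, selected].max(axis=1)
        sim_to_sel[selected] = np.inf  # never re-pick a selected neuron
        selected.append(int(np.argmin(sim_to_sel)))
    mask = np.zeros(n)
    mask[selected] = 1.0
    return mask

# Toy usage: 256 samples of 8-neuron activations.
acts = np.random.default_rng(1).normal(size=(256, 8))
print(diverse_dropout_mask(acts, keep=4))
```

Because the mask is computed at inference time from activation statistics, this style of sampling slots into an existing trained network's dropout layers without retraining, consistent with the plug-in property the abstract claims.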