Paper Title
Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning
Paper Authors
Paper Abstract
Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessing ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight into this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent, in terms of test performance, to an ensemble of only a few independently trained networks.
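A minimal sketch of the DEE score mentioned in the abstract, assuming test performance is measured by a calibrated log-likelihood CLL and that DE denotes a plain deep ensemble of independently trained networks; the notation below is an illustration of the idea rather than a quotation of the paper's definition:

\[
\mathrm{DEE}_{m}(k) \;=\; \min\bigl\{\, l \ge 1 \;:\; \mathrm{CLL}_{\mathrm{DE}}(l) \;\ge\; \mathrm{CLL}_{m}(k) \,\bigr\},
\]

i.e., the smallest number l of independently trained networks whose ensemble matches the test performance of ensembling technique m using k samples (with non-integer l obtained by interpolation between integer ensemble sizes).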