Paper Title

A Simple Framework for Uncertainty in Contrastive Learning

Paper Authors

Mike Wu, Noah Goodman

Paper Abstract

Contrastive approaches to representation learning have recently shown great promise. In contrast to generative approaches, these contrastive models learn a deterministic encoder with no notion of uncertainty or confidence. In this paper, we introduce a simple approach based on "contrasting distributions" that learns to assign uncertainty for pretrained contrastive representations. In particular, we train a deep network from a representation to a distribution in representation space, whose variance can be used as a measure of confidence. In our experiments, we show that this deep uncertainty model can be used (1) to visually interpret model behavior, (2) to detect new noise in the input to deployed models, (3) to detect anomalies, where we outperform 10 baseline methods across 11 tasks with improvements of up to 14% absolute, and (4) to classify out-of-distribution examples where our fully unsupervised model is competitive with supervised methods.
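
The abstract only sketches the method, so the following is a minimal illustrative sketch rather than the paper's implementation: a small PyTorch head that maps a frozen, pretrained contrastive representation to a diagonal Gaussian in representation space and treats the predicted variance as a confidence score. Training with a Gaussian negative log-likelihood against the representation of an augmented view is our own assumption about a plausible training signal, and names like `DeepUncertaintyModel` and `gaussian_nll` are hypothetical.

```python
import torch
import torch.nn as nn


class DeepUncertaintyModel(nn.Module):
    """Map a contrastive representation z to a diagonal Gaussian
    N(mu(z), diag(exp(logvar(z)))) in representation space.
    Higher predicted variance = lower confidence."""

    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu_head = nn.Linear(hidden, dim)
        # Predict log-variance so the output is unconstrained and stable.
        self.logvar_head = nn.Linear(hidden, dim)

    def forward(self, z: torch.Tensor):
        h = self.backbone(z)
        return self.mu_head(h), self.logvar_head(h)


def gaussian_nll(mu, logvar, target):
    """Negative log-likelihood of `target` under N(mu, exp(logvar)),
    up to an additive constant."""
    return 0.5 * (logvar + (target - mu) ** 2 / logvar.exp()).sum(dim=-1).mean()


if __name__ == "__main__":
    dim = 128  # dimensionality of the contrastive embedding (assumption)
    model = DeepUncertaintyModel(dim)

    # Stand-ins for encoder outputs: in practice z and z_aug would come from
    # a frozen pretrained contrastive encoder applied to an input and an
    # augmented view of it (our assumption about the training signal).
    z = torch.randn(32, dim)
    z_aug = z + 0.1 * torch.randn(32, dim)

    mu, logvar = model(z)
    loss = gaussian_nll(mu, logvar, z_aug)
    loss.backward()

    # Total predicted variance as a per-example uncertainty score.
    uncertainty = logvar.exp().sum(dim=-1)
    print(loss.item(), uncertainty.shape)
```

Predicting log-variance rather than variance keeps the head's output unconstrained and the likelihood numerically stable; at test time, the summed predicted variance can be thresholded for the noise-detection, anomaly-detection, and out-of-distribution uses the abstract describes.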
