Paper Title

Transductive Decoupled Variational Inference for Few-Shot Classification

Authors

Anuj Singh, Hadi Jamali-Rad

Abstract

The versatility to learn from a handful of samples is the hallmark of human intelligence. Few-shot learning is an endeavour to transcend this capability down to machines. Inspired by the promise and power of probabilistic deep learning, we propose a novel variational inference network for few-shot classification (coined as TRIDENT) to decouple the representation of an image into semantic and label latent variables, and simultaneously infer them in an intertwined fashion. To induce task-awareness, as part of the inference mechanics of TRIDENT, we exploit information across both query and support images of a few-shot task using a novel built-in attention-based transductive feature extraction module (we call AttFEX). Our extensive experimental results corroborate the efficacy of TRIDENT and demonstrate that, using the simplest of backbones, it sets a new state-of-the-art in the most commonly adopted datasets miniImageNet and tieredImageNet (offering up to 4% and 5% improvements, respectively), as well as for the recent challenging cross-domain miniImagenet --> CUB scenario offering a significant margin (up to 20% improvement) beyond the best existing cross-domain baselines. Code and experimentation can be found in our GitHub repository: https://github.com/anujinho/trident
