Paper Title

Kernel Self-Attention in Deep Multiple Instance Learning

Paper Authors

Dawid Rymarczyk, Adriana Borowa, Jacek Tabor, Bartosz Zieliński

Paper Abstract

Not all supervised learning problems are described by a pair of a fixed-size input tensor and a label. In some cases, especially in medical image analysis, a label corresponds to a bag of instances (e.g. image patches), and to classify such a bag, information from all of its instances must be aggregated. There have been several attempts to create a model that works with a bag of instances; however, they assume that there are no dependencies within the bag and that the label is connected to at least one instance. In this work, we introduce the Self-Attention Attention-based MIL Pooling (SA-AbMILP) aggregation operation to account for the dependencies between instances. We conduct several experiments on MNIST, histological, microbiological, and retinal databases to show that SA-AbMILP performs better than other models. Additionally, we investigate kernel variations of Self-Attention and their influence on the results.
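The abstract describes SA-AbMILP as a two-step aggregation: self-attention first models dependencies between instance embeddings, and attention-based MIL pooling then collapses the bag into a single representation for bag-level classification. Below is a minimal PyTorch sketch of that pipeline under stated assumptions: it operates on precomputed instance embeddings, the class name and layer sizes are illustrative, and the Gaussian-style `rbf` option stands in for the kernel variations the paper studies; it is not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SAAbMILP(nn.Module):
    """Sketch: self-attention over instances + attention-based MIL pooling."""

    def __init__(self, dim=128, attn_dim=64, kernel="dot"):
        super().__init__()
        # Self-attention projections over instance embeddings.
        self.q = nn.Linear(dim, dim // 8, bias=False)
        self.k = nn.Linear(dim, dim // 8, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual gate
        self.kernel = kernel
        # Attention-based MIL pooling (Ilse et al., 2018 style).
        self.V = nn.Linear(dim, attn_dim)
        self.w = nn.Linear(attn_dim, 1)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, h):                        # h: (n_instances, dim), one bag
        q, k = self.q(h), self.k(h)
        if self.kernel == "rbf":                 # illustrative kernel variant
            sim = -torch.cdist(q, k) ** 2        # Gaussian-style similarity
        else:                                    # standard dot-product attention
            sim = q @ k.t()
        attn = F.softmax(sim, dim=-1)            # (n, n) instance dependencies
        h = h + self.gamma * (attn @ self.v(h))  # dependency-aware embeddings
        # MIL pooling: one attention weight per instance, then a weighted sum.
        a = F.softmax(self.w(torch.tanh(self.V(h))), dim=0)  # (n, 1)
        z = (a * h).sum(dim=0)                   # bag representation, (dim,)
        return torch.sigmoid(self.classifier(z)) # bag-level probability
```

For example, a bag of 50 patch embeddings, `SAAbMILP()(torch.randn(50, 128))`, yields a single probability that the bag is positive; in practice the embeddings would come from a CNN encoder applied to each patch.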
