Paper Title

Context Label Learning: Improving Background Class Representations in Semantic Segmentation

Authors

Zeju Li, Konstantinos Kamnitsas, Cheng Ouyang, Chen Chen, Ben Glocker

Abstract


Background samples provide key contextual information for segmenting regions of interest (ROIs). However, they often cover a diverse set of structures, making it difficult for the segmentation model to learn decision boundaries with both high sensitivity and precision. The issue arises from the highly heterogeneous nature of the background class, which results in multi-modal feature distributions. Empirically, we find that neural networks trained with a heterogeneous background class struggle to map the corresponding contextual samples to compact clusters in feature space. As a result, the distribution of background logit activations may shift across the decision boundary, leading to systematic over-segmentation across different datasets and tasks. In this study, we propose context label learning (CoLab) to improve the context representations by decomposing the background class into several subclasses. Specifically, we train an auxiliary network as a task generator, along with the primary segmentation model, to automatically generate context labels that positively affect the ROI segmentation accuracy. Extensive experiments are conducted on several challenging segmentation tasks and datasets. The results demonstrate that CoLab can guide the segmentation model to map the logits of background samples away from the decision boundary, resulting in significantly improved segmentation accuracy. Code is available.
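The core mechanic the abstract describes, decomposing the single background class into several context subclasses during training, implies that at inference time the subclass probabilities must be collapsed back into one background probability. The sketch below illustrates that merging step only; it is a minimal numpy illustration, not the authors' implementation, and the channel layout (channel 0 as ROI, remaining channels as generated context labels) and function names are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def merge_context_subclasses(logits):
    """Collapse the context-subclass probabilities back into a single
    background probability, keeping the ROI probability intact.

    logits: array of shape (..., 1 + K); channel 0 is assumed to be the
    ROI, the remaining K channels the generated context subclasses.
    Returns binary probabilities of shape (..., 2): [ROI, background].
    """
    probs = softmax(logits, axis=-1)
    roi = probs[..., :1]                               # ROI probability
    background = probs[..., 1:].sum(axis=-1, keepdims=True)  # sum of subclasses
    return np.concatenate([roi, background], axis=-1)

# Toy example: per-pixel logits for 1 ROI class + 3 context subclasses.
logits = np.array([[2.0, 0.5, 0.1, -1.0],    # pixel likely ROI
                   [-1.0, 1.5, 2.0, 0.3]])   # pixel likely background
binary = merge_context_subclasses(logits)
print(binary)  # shape (2, 2); each row sums to 1
```

Because the subclass probabilities are summed after the softmax, the merged output remains a valid two-class distribution, so the model can be trained with the finer context labels but evaluated as an ordinary binary ROI segmenter.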
