Paper title
Self-paced and self-consistent co-training for semi-supervised image segmentation
Paper authors
Paper abstract
Deep co-training has recently been proposed as an effective approach for image segmentation when annotated data is scarce. In this paper, we improve existing approaches for semi-supervised segmentation with a self-paced and self-consistent co-training method. To help distill information from unlabeled images, we first design a self-paced learning strategy for co-training that lets jointly-trained neural networks focus on easier-to-segment regions first, and then gradually consider harder ones. This is achieved via an end-to-end differentiable loss in the form of a generalized Jensen-Shannon divergence (JSD). Moreover, to encourage predictions from different networks to be both consistent and confident, we enhance this generalized JSD loss with an uncertainty regularizer based on entropy. The robustness of individual models is further improved using a self-ensembling loss that enforces their predictions to be consistent across different training iterations. We demonstrate the potential of our method on three challenging image segmentation problems with different image modalities, using a small fraction of labeled data. Results show clear advantages in terms of performance compared to standard co-training baselines and recently proposed state-of-the-art approaches for semi-supervised segmentation.
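The abstract names three loss components: a generalized JSD consistency term between the co-trained networks, an entropy-based uncertainty regularizer, and a self-ensembling consistency mechanism across training iterations. The PyTorch sketch below illustrates one plausible form of these ideas; it is not the authors' released code. The function names, the self-paced exponential weighting, the `temperature` annealing scheme, and the weight `lam` are all assumptions made for illustration.

```python
import torch

def entropy(p, eps=1e-8):
    """Per-pixel Shannon entropy of a (B, C, H, W) probability map."""
    return -(p * (p + eps).log()).sum(dim=1)  # -> (B, H, W)

def co_training_loss(p1, p2, temperature=1.0, lam=0.1):
    """Hypothetical co-training objective on unlabeled pixels:
    generalized JSD between the two networks' softmax outputs,
    a self-paced per-pixel weighting, and an entropy regularizer
    on the mean prediction that rewards confident agreement."""
    m = 0.5 * (p1 + p2)
    jsd = entropy(m) - 0.5 * (entropy(p1) + entropy(p2))
    # Self-paced weighting (an assumed, differentiable-friendly scheme):
    # low-entropy (easy) pixels dominate early; annealing `temperature`
    # upward over training gradually admits harder pixels.
    w = torch.exp(-entropy(m).detach() / temperature)
    return (w * jsd).mean() + lam * entropy(m).mean()

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """One common realization of self-ensembling: keep an exponential
    moving average (teacher) of the student's weights, whose predictions
    serve as stable targets for a consistency loss across iterations."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)
```

Under these assumptions, a training step on an unlabeled batch would compute `co_training_loss` on the two networks' softmax outputs alongside the supervised loss on labeled data, and call `ema_update` after each optimizer step so the moving-average model can provide the cross-iteration consistency targets described in the abstract.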