Paper Title
Unsupervised Model Adaptation for Continual Semantic Segmentation
Paper Authors
Paper Abstract
We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain. A similar problem has been studied extensively in the unsupervised domain adaptation (UDA) literature, but existing UDA algorithms require access to both the source domain labeled data and the target domain unlabeled data for training a domain-agnostic semantic segmentation model. Relaxing this constraint enables a user to adapt pretrained models to generalize in a target domain without requiring access to source data. To this end, we learn a prototypical distribution for the source domain in an intermediate embedding space. This distribution encodes the abstract knowledge that is learned from the source domain. We then use this distribution to align the target domain distribution with the source domain distribution in the embedding space. We provide theoretical analysis and explain the conditions under which our algorithm is effective. Experiments on benchmark adaptation tasks demonstrate that our method achieves competitive performance, even when compared with joint UDA approaches.
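The sketch below illustrates the two-phase idea described in the abstract: first learn a parametric "prototypical" distribution over source-domain embeddings while the source data is available, then adapt the model on unlabeled target data alone by pulling target embeddings toward samples drawn from that distribution. The Gaussian mixture prototype, the sliced Wasserstein alignment loss, and all network sizes and names here are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: source-free adaptation via a learned prototypical distribution.
# The GMM prototype and sliced Wasserstein loss are assumed choices for illustration.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

torch.manual_seed(0)

# Toy stand-ins for a segmentation network's encoder and classifier head
# (dimensions are hypothetical placeholders).
feat_dim, num_classes = 64, 19
encoder = nn.Sequential(nn.Linear(128, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
classifier = nn.Linear(feat_dim, num_classes)

# --- Phase 1 (source data available): learn the prototypical distribution ---
source_inputs = torch.randn(2000, 128)            # placeholder labeled source data
with torch.no_grad():
    source_feats = encoder(source_inputs).numpy()
prototype = GaussianMixture(n_components=num_classes).fit(source_feats)

# --- Phase 2 (source data no longer accessible): adapt on target data only ---
def sliced_wasserstein(x, y, n_proj=64):
    """Approximate distance between two empirical distributions via random 1-D projections."""
    proj = torch.randn(x.size(1), n_proj)
    proj = proj / proj.norm(dim=0, keepdim=True)
    x_p, _ = torch.sort(x @ proj, dim=0)
    y_p, _ = torch.sort(y @ proj, dim=0)
    return ((x_p - y_p) ** 2).mean()

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
target_inputs = torch.randn(256, 128)             # placeholder unlabeled target data

for step in range(100):
    # Draw pseudo-source features from the learned prototypical distribution.
    sampled, _ = prototype.sample(target_inputs.size(0))
    sampled = torch.as_tensor(sampled, dtype=torch.float32)

    # Align target embeddings with the prototypical (source-like) distribution.
    target_feats = encoder(target_inputs)
    loss = sliced_wasserstein(target_feats, sampled)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only the prototypical distribution (not the source images) is needed in the second phase, the adaptation step respects the abstract's constraint that source data is inaccessible after pretraining.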