Paper Title

Towards Uncovering the Intrinsic Data Structures for Unsupervised Domain Adaptation using Structurally Regularized Deep Clustering

Authors

Hui Tang, Xiatian Zhu, Ke Chen, Kui Jia, C. L. Philip Chen

Abstract

Unsupervised domain adaptation (UDA) aims to learn classification models that make predictions for unlabeled data on a target domain, given labeled data on a source domain whose distribution diverges from that of the target. Mainstream UDA methods strive to learn domain-aligned features such that classifiers trained on the source features can be readily applied to the target ones. Although impressive results have been achieved, these methods carry a potential risk of damaging the intrinsic data structures of target discrimination, raising an issue of generalization particularly for UDA tasks in an inductive setting. To address this issue, we are motivated by a UDA assumption of structural similarity across domains, and propose to directly uncover the intrinsic target discrimination via constrained clustering, where we constrain the clustering solutions using a structural source regularization that hinges on the very same assumption. Technically, we propose a hybrid model of Structurally Regularized Deep Clustering, which integrates the regularized discriminative clustering of target data with a generative one, and we thus term our method H-SRDC. Our hybrid model is based on a deep clustering framework that minimizes the Kullback-Leibler divergence between the distribution of network predictions and an auxiliary one, where we impose structural regularization by learning a domain-shared classifier and cluster centroids. By enriching the structural similarity assumption, we are able to extend H-SRDC to a pixel-level UDA task of semantic segmentation. We conduct extensive experiments on seven UDA benchmarks of image classification and semantic segmentation. With no explicit feature alignment, our proposed H-SRDC outperforms all existing methods under both the inductive and transductive settings. We make our implementation code publicly available at https://github.com/huitangtang/H-SRDC.
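To make the clustering objective in the abstract concrete, the sketch below illustrates a KL-divergence loss between soft cluster assignments and an auxiliary target distribution, in the spirit of DEC-style deep clustering. The function names, the cosine-similarity assignment rule, and the squared-and-renormalized auxiliary distribution are illustrative assumptions, not the authors' released H-SRDC implementation; consult the linked repository for the actual method.

```python
# Minimal sketch (assumptions, not the official H-SRDC code) of a deep-clustering
# objective that minimizes KL(auxiliary || prediction) over target features and
# shared cluster centroids, as described at a high level in the abstract.
import torch
import torch.nn.functional as F

def soft_assignments(features, centroids, temperature=1.0):
    """Soft cluster probabilities from feature-to-centroid cosine similarities."""
    sims = F.normalize(features, dim=1) @ F.normalize(centroids, dim=1).t()
    return F.softmax(sims / temperature, dim=1)  # shape: (batch, num_clusters)

def auxiliary_distribution(q):
    """DEC-style sharpened target distribution used as the auxiliary one."""
    weight = q ** 2 / q.sum(dim=0, keepdim=True)
    return weight / weight.sum(dim=1, keepdim=True)

def clustering_kl_loss(features, centroids):
    """KL(auxiliary || prediction), minimized w.r.t. features and centroids."""
    q = soft_assignments(features, centroids)
    p = auxiliary_distribution(q).detach()  # treat the auxiliary one as fixed
    return F.kl_div(q.log(), p, reduction="batchmean")

# Toy usage: 8 target-domain features of dimension 64, 5 shared cluster centroids.
feats = torch.randn(8, 64, requires_grad=True)
cents = torch.randn(5, 64, requires_grad=True)
loss = clustering_kl_loss(feats, cents)
loss.backward()
```

In this sketch the centroids play the role of the domain-shared cluster centroids mentioned in the abstract; the structural source regularization and the generative branch of the hybrid model are omitted for brevity.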
