Paper Title


Robust Domain Adaptation: Representations, Weights and Inductive Bias

Paper Authors

Victor Bouvier, Philippe Very, Clément Chastagnol, Myriam Tami, Céline Hudelot

Paper Abstract

Unsupervised Domain Adaptation (UDA) has attracted a lot of attention in the last ten years. The emergence of Domain Invariant Representations (IR) has drastically improved the transferability of representations from a labelled source domain to a new and unlabelled target domain. However, a potential pitfall of this approach, namely the presence of label shift, has been brought to light. Some works address this issue with a relaxed version of domain invariance obtained by weighting samples, a strategy often referred to as Importance Sampling. From our point of view, the theoretical aspects of how Importance Sampling and Invariant Representations interact in UDA have not been studied in depth. In the present work, we present a bound of the target risk which incorporates both weights and invariant representations. Our theoretical analysis highlights the role of inductive bias in aligning distributions across domains. We illustrate it on standard benchmarks by proposing a new learning procedure for UDA. We observe empirically that a weak inductive bias makes adaptation more robust. The elaboration of stronger inductive bias is a promising direction for new UDA algorithms.
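To make the idea of "relaxed domain invariance obtained by weighting samples" concrete, below is a minimal sketch of how importance weights derived from estimated class ratios can be folded into a representation-alignment objective. This is an assumption-laden illustration, not the authors' actual algorithm or bound: the estimate_weights helper and the weighted mean-feature-matching alignment term are hypothetical stand-ins chosen for simplicity.

```python
# A minimal sketch (assumed setup, not the paper's exact procedure):
# source samples are re-weighted by estimated class ratios (importance
# sampling) before the alignment term, relaxing strict domain invariance
# under label shift. Weighted mean-feature matching stands in for the
# alignment objectives discussed in the paper.
import torch
import torch.nn as nn

input_dim, feature_dim, num_classes = 10, 64, 3

encoder = nn.Sequential(nn.Linear(input_dim, feature_dim), nn.ReLU())  # shared representation
classifier = nn.Linear(feature_dim, num_classes)                       # source-supervised label predictor

def estimate_weights(source_labels, target_probs, num_classes):
    """Importance weights w(y) ~ p_T(y) / p_S(y); p_T(y) is estimated from the
    classifier's soft predictions on unlabelled target data (hypothetical
    estimator, for illustration only)."""
    p_s = torch.bincount(source_labels, minlength=num_classes).float()
    p_s = p_s / p_s.sum()
    p_t = target_probs.mean(dim=0)
    return (p_t / p_s.clamp_min(1e-6)).clamp(0.1, 10.0)

def adaptation_loss(x_s, y_s, x_t, lam=0.1):
    z_s, z_t = encoder(x_s), encoder(x_t)

    # Supervised loss on the labelled source domain.
    cls_loss = nn.functional.cross_entropy(classifier(z_s), y_s)

    # Per-sample importance weights from the estimated label ratios.
    with torch.no_grad():
        target_probs = classifier(z_t).softmax(dim=-1)
        w = estimate_weights(y_s, target_probs, num_classes)[y_s]

    # Weighted alignment: match the re-weighted source feature mean to the
    # target feature mean (a crude proxy for relaxed domain invariance).
    src_mean = (w.unsqueeze(1) * z_s).sum(dim=0) / w.sum()
    align_loss = (src_mean - z_t.mean(dim=0)).pow(2).sum()

    return cls_loss + lam * align_loss

# Toy usage with random tensors standing in for real source/target data.
x_s, y_s = torch.randn(32, input_dim), torch.randint(0, num_classes, (32,))
x_t = torch.randn(32, input_dim)
loss = adaptation_loss(x_s, y_s, x_t)
loss.backward()
```

In this sketch the inductive bias enters through the choice of encoder, classifier, and weight estimator; the paper's point is that such choices govern how reliably the weighted alignment behaves under label shift.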
