Paper Title
DAN: Dual-View Representation Learning for Adapting Stance Classifiers to New Domains
Paper Authors
Paper Abstract
We address the issue of having a limited number of annotations for stance classification in a new domain, by adapting out-of-domain classifiers with domain adaptation. Existing approaches often align different domains in a single, global feature space (or view), which may fail to fully capture the richness of the language used for expressing stances, leading to reduced adaptability on stance data. In this paper, we identify two major types of stance expressions that are linguistically distinct, and we propose a tailored dual-view adaptation network (DAN) to adapt these expressions across domains. The proposed model first learns a separate view for domain transfer in each expression channel and then selects the best adapted parts of both views for optimal transfer. We find that the learned view features can be more easily aligned and more stance-discriminative in either or both views, leading to more transferable overall features after combining the views. Results from extensive experiments show that our method can enhance state-of-the-art single-view methods in matching stance data across different domains, and that it consistently improves those methods on various adaptation tasks.
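The dual-view combination described in the abstract can be illustrated with a minimal sketch: two view-specific encoders each produce features, and an element-wise gate selects how much of each view contributes to the combined representation. All names, shapes, and the fixed gate below are illustrative assumptions, not the paper's actual implementation (where the views and their combination would be learned end to end).

```python
import numpy as np

rng = np.random.default_rng(0)

def view_encoder(x, W):
    """Toy view-specific encoder: linear map + ReLU (an assumption for illustration)."""
    return np.maximum(x @ W, 0.0)

x = rng.normal(size=(4, 8))    # batch of 4 inputs with 8 raw features
W1 = rng.normal(size=(8, 5))   # hypothetical encoder weights for view 1
W2 = rng.normal(size=(8, 5))   # hypothetical encoder weights for view 2

v1 = view_encoder(x, W1)       # features from view 1
v2 = view_encoder(x, W2)       # features from view 2

# Element-wise gate in [0, 1] picks the "best parts" of each view;
# in the paper this selection would be learned, here it is fixed.
gate = rng.uniform(size=(1, 5))
combined = gate * v1 + (1.0 - gate) * v2

print(combined.shape)
```

The gating form `g * v1 + (1 - g) * v2` is one common way to realize a soft per-dimension selection between two feature views.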