Paper Title


Learn to Talk via Proactive Knowledge Transfer

Authors

Qing Sun, James Cross

Abstract


Knowledge Transfer has been applied in solving a wide variety of problems. For example, knowledge can be transferred between tasks (e.g., learning to handle novel situations by leveraging prior knowledge) or between agents (e.g., learning from others without direct experience). Without loss of generality, we relate knowledge transfer to KL-divergence minimization, i.e., matching the (belief) distributions of learners and teachers. The equivalence gives us a new perspective in understanding variants of the KL-divergence by looking at how learners structure their interaction with teachers in order to acquire knowledge. In this paper, we provide an in-depth analysis of KL-divergence minimization in Forward and Backward orders, which shows that learners are reinforced via on-policy learning in Backward. In contrast, learners are supervised in Forward. Moreover, our analysis is gradient-based, so it can be generalized to arbitrary tasks and help to decide which order to minimize given the property of the task. By replacing Forward with Backward in Knowledge Distillation, we observed +0.7-1.1 BLEU gains on the WMT'17 De-En and IWSLT'15 Th-En machine translation tasks.
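The abstract's central contrast can be made concrete with the two directions of KL-divergence between a teacher distribution p and a learner distribution q. Forward KL(p‖q) takes its expectation under the teacher, so its gradient with respect to the learner's logits is the familiar supervised cross-entropy signal (q − p); backward KL(q‖p) takes its expectation under the learner itself, so minimizing it requires evaluating (in practice, sampling) the learner's own outputs, i.e., on-policy learning. The following is a minimal sketch of that asymmetry for discrete distributions, not the paper's implementation; all function names here are illustrative:

```python
import math

def softmax(logits):
    """Convert logits to a categorical probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def forward_kl(p, q):
    """KL(p || q): expectation under the teacher p (the supervised direction)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def backward_kl(p, q):
    """KL(q || p): expectation under the learner q (the on-policy direction)."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def forward_kl_logit_grad(p, q):
    """Gradient of KL(p || q) w.r.t. the learner's logits: simply (q - p),
    the standard cross-entropy signal -- no samples from q are needed."""
    return [qi - pi for pi, qi in zip(p, q)]

if __name__ == "__main__":
    teacher = softmax([2.0, 1.0, 0.1])   # a peaked teacher belief
    student = softmax([0.5, 0.5, 0.5])   # an untrained, uniform learner
    print("forward  KL(p||q):", round(forward_kl(teacher, student), 4))
    print("backward KL(q||p):", round(backward_kl(teacher, student), 4))
    print("forward-KL logit grad:", [round(g, 4) for g in forward_kl_logit_grad(teacher, student)])
```

The two divergences differ in general (KL is asymmetric), which is exactly why the paper's choice of minimization order matters: the forward gradient is available in closed form from the teacher's targets, while the backward objective weights errors by the learner's own probability mass.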
