Paper Title

Dual Inference for Improving Language Understanding and Generation

Paper Authors

Shang-Yu Su, Yung-Sung Chuang, Yun-Nung Chen

Paper Abstract

Natural language understanding (NLU) and natural language generation (NLG) hold a strong dual relationship: NLU aims to predict semantic labels from natural language utterances, while NLG does the opposite. Prior work has mainly focused on exploiting this duality during model training in order to obtain better-performing models. However, given the fast-growing scale of models in NLP, retraining complete NLU and NLG models is sometimes impractical. To address this issue, this paper proposes leveraging the duality at the inference stage, without any retraining. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed method for both NLU and NLG, showing great potential for practical use.
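As a rough sketch of the idea (not the authors' exact formulation; the function names and the interpolation weight below are hypothetical), inference-stage duality can be realized by reranking one model's candidate outputs with the score of its dual model, following the joint-probability identity p(x)p(y|x) = p(y)p(x|y):

```python
# Minimal sketch of dual inference for NLG (hypothetical interfaces).
# Candidate utterances from an NLG model are reranked by combining the
# forward score log p(y|x) with the dual NLU score log p(x|y) plus a
# language-model prior log p(y). Only scores are combined at inference
# time, so neither model needs to be retrained.

from typing import Callable, List

def dual_inference_rerank(
    x: str,                                      # input semantic frame
    candidates: List[str],                       # candidate utterances from NLG
    nlg_log_prob: Callable[[str, str], float],   # log p(y | x), primal model
    nlu_log_prob: Callable[[str, str], float],   # log p(x | y), dual model
    lm_log_prob: Callable[[str], float],         # log p(y), prior over outputs
    alpha: float = 0.5,                          # interpolation weight (assumed)
) -> str:
    """Return the candidate maximizing a weighted primal/dual score."""
    def score(y: str) -> float:
        primal = nlg_log_prob(x, y)
        dual = nlu_log_prob(y, x) + lm_log_prob(y)
        return alpha * primal + (1.0 - alpha) * dual
    return max(candidates, key=score)
```

The same scheme applies symmetrically in the NLU direction by swapping the roles of the utterance and the semantic labels.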
