Paper Title

Improving End-to-End Speech-to-Intent Classification with Reptile

Authors

Yusheng Tian, Philip John Gorinski

Abstract

End-to-end spoken language understanding (SLU) systems have many advantages over conventional pipeline systems, but collecting in-domain speech data to train an end-to-end system is costly and time-consuming. One question arises from this: how to train an end-to-end SLU system with limited amounts of data? Many researchers have explored approaches that make use of other related data resources, typically by pre-training parts of the model on high-resource speech recognition. In this paper, we suggest improving the generalization performance of SLU models with a non-standard learning algorithm, Reptile. Though Reptile was originally proposed for model-agnostic meta-learning, we argue that it can also be used to directly learn a target task and yield better generalization than conventional gradient descent. In this work, we apply Reptile to the task of end-to-end spoken intent classification. Experiments on four datasets of different languages and domains show improvement in intent prediction accuracy, both when Reptile is used alone and when it is used in addition to pre-training.
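As a rough illustration of the idea described in the abstract (not the paper's actual implementation), the sketch below applies the Reptile update directly to a single target task: copy the current weights, take a few inner SGD steps on batches from the task, then move the original weights a fraction of the way toward the adapted weights. The function name, hyperparameter values, and PyTorch training setup here are assumptions for illustration only.

import copy
import torch

def reptile_train(model, loss_fn, data_loader, outer_steps=1000,
                  inner_steps=5, inner_lr=1e-3, outer_lr=0.1):
    """Minimal sketch of Reptile applied directly to one task.

    Outer loop: theta <- theta + outer_lr * (phi - theta), where phi is
    obtained by running `inner_steps` SGD steps starting from theta.
    """
    data_iter = iter(data_loader)
    for _ in range(outer_steps):
        # Inner loop runs on a temporary copy of the current weights.
        inner_model = copy.deepcopy(model)
        optimizer = torch.optim.SGD(inner_model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            try:
                x, y = next(data_iter)
            except StopIteration:
                data_iter = iter(data_loader)
                x, y = next(data_iter)
            optimizer.zero_grad()
            loss_fn(inner_model(x), y).backward()
            optimizer.step()
        # Reptile (outer) update: interpolate toward the adapted weights.
        with torch.no_grad():
            for p, p_adapted in zip(model.parameters(),
                                    inner_model.parameters()):
                p.add_(outer_lr * (p_adapted - p))
    return model

With inner_steps = 1 and outer_lr = 1 this reduces to plain SGD; the generalization benefit argued for in the paper comes from taking several inner steps before the interpolated outer update.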
