Paper Title
Efficient Inference For Neural Machine Translation
Paper Authors
Paper Abstract
Large Transformer models have achieved state-of-the-art results in neural machine translation and have become standard in the field. In this work, we look for the optimal combination of known techniques to optimize inference speed without sacrificing translation quality. We conduct an empirical study that stacks various approaches, and demonstrate that combining the replacement of decoder self-attention with simplified recurrent units, a deep-encoder shallow-decoder architecture, and multi-head attention pruning can achieve up to 109% and 84% speedups on CPU and GPU respectively, and reduce the number of parameters by 25%, while maintaining the same translation quality in terms of BLEU.
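The key property of the simplified recurrent unit mentioned above is that each decoding step updates a fixed-size state instead of attending over all previous positions, so per-step cost is constant in output length. A minimal sketch of one such unit, assuming an SSRU-style formulation (forget gate plus ReLU output; all names, shapes, and weights here are illustrative, not the paper's exact implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ssru_step(x_t, c_prev, W, W_f, b_f):
    """One step of a simplified recurrent unit (SSRU-style sketch).

    Unlike decoder self-attention, the new state c_t depends only on the
    previous state c_prev and the current input x_t, so there is no
    growing key/value cache to attend over during incremental decoding.
    """
    f_t = sigmoid(x_t @ W_f + b_f)                 # forget gate
    c_t = f_t * c_prev + (1.0 - f_t) * (x_t @ W)   # gated state update
    h_t = np.maximum(c_t, 0.0)                     # ReLU output
    return h_t, c_t

# Toy usage: decode 5 steps with hidden size 8.
rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))
W_f = rng.normal(size=(d, d))
b_f = np.zeros(d)
c = np.zeros(d)
for _ in range(5):
    x = rng.normal(size=d)
    h, c = ssru_step(x, c, W, W_f, b_f)
print(h.shape)
```

Note the contrast with self-attention: here the loop body touches only O(d^2) work per step regardless of how many tokens have been generated, which is what makes shallow recurrent decoders attractive for inference.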