Paper Title
Transformer based Multilingual document Embedding model
Paper Authors
Paper Abstract
One of the current state-of-the-art multilingual document embedding models, LASER, is based on a bidirectional LSTM (BiLSTM) neural machine translation (NMT) model. This paper presents a transformer-based sentence/document embedding model, T-LASER, which makes three significant improvements. Firstly, the BiLSTM layers are replaced by attention-based transformer layers, which are more capable of learning sequential patterns in longer texts. Secondly, due to the absence of recurrence, T-LASER enables faster parallel computation in the encoder when generating the text embedding. Thirdly, we augment the NMT translation loss function with an additional novel distance constraint loss. This distance constraint loss further pulls the embeddings of parallel sentences closer together in the vector space; we call the T-LASER model trained with the distance constraint cT-LASER. Our cT-LASER model significantly outperforms both the BiLSTM-based LASER and the simpler transformer-based T-LASER.
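As a rough illustration of the third improvement, the sketch below combines a token-level NMT cross-entropy loss with an L2 distance term between the fixed-size embeddings of a source sentence and its parallel translation. This is a minimal sketch under stated assumptions, not the paper's implementation: the function name ct_laser_loss, the weighting factor alpha, and the choice of mean squared distance for the constraint are illustrative assumptions, as the abstract does not specify these details.

import torch
import torch.nn.functional as F

def ct_laser_loss(translation_logits, target_ids,
                  src_embedding, tgt_embedding,
                  pad_id=0, alpha=1.0):
    """Hypothetical combined loss for cT-LASER-style training.

    Adds a distance constraint, which pulls the embeddings of
    parallel sentences together in the shared vector space, to the
    standard NMT translation loss.
    """
    # Standard NMT translation loss: token-level cross-entropy
    # over the decoder's output vocabulary, ignoring padding.
    nmt_loss = F.cross_entropy(
        translation_logits.view(-1, translation_logits.size(-1)),
        target_ids.view(-1),
        ignore_index=pad_id,
    )
    # Distance constraint: mean squared L2 distance between the
    # fixed-size embeddings of a source sentence and its translation.
    dist_loss = F.mse_loss(src_embedding, tgt_embedding)
    # alpha (a hypothetical knob) weights the distance constraint
    # against the translation loss.
    return nmt_loss + alpha * dist_loss

In such a setup, translation_logits would presumably come from the transformer decoder, while the two sentence embeddings would be pooled from the transformer encoder outputs, in the spirit of LASER-style encoder-decoder training.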