Paper Title
Linformer: Self-Attention with Linear Complexity
Paper Authors
Paper Abstract
Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. The resulting linear transformer, the \textit{Linformer}, performs on par with standard Transformer models, while being much more memory- and time-efficient.
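To make the low-rank idea concrete, here is a minimal PyTorch sketch of projected self-attention: the keys and values are compressed along the sequence dimension from length $n$ to a fixed size $k$, so the attention matrix is $n \times k$ rather than $n \times n$. The class name `LinearSelfAttention`, the projection size `k`, and the single-head, fixed-sequence-length layout are illustrative assumptions for this sketch, not the authors' reference implementation.

```python
# Sketch of low-rank (projected) self-attention; assumptions noted in the text above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearSelfAttention(nn.Module):
    def __init__(self, seq_len: int, d_model: int, k: int = 256):
        super().__init__()
        self.d_model = d_model
        # Standard query/key/value maps.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Learned projections that compress the sequence dimension n -> k,
        # giving a low-rank approximation of the n x n attention matrix.
        self.E = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)
        self.F = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, d_model)
        q = self.q_proj(x)                       # (batch, n, d)
        k = self.E @ self.k_proj(x)              # (batch, k, d) -- n compressed to k
        v = self.F @ self.v_proj(x)              # (batch, k, d)
        # The score matrix is (n x k) instead of (n x n), so time and
        # memory grow linearly in n for a fixed projection size k.
        scores = q @ k.transpose(-2, -1) / self.d_model ** 0.5
        attn = F.softmax(scores, dim=-1)         # (batch, n, k)
        return attn @ v                          # (batch, n, d)


if __name__ == "__main__":
    x = torch.randn(2, 1024, 64)                 # batch of 2, n = 1024, d = 64
    out = LinearSelfAttention(seq_len=1024, d_model=64, k=128)(x)
    print(out.shape)                             # torch.Size([2, 1024, 64])
```

In this sketch the projection matrices are fixed per sequence length; choosing `k` much smaller than `n` is what yields the $O(nk)$ (linear in $n$) cost described in the abstract.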