Paper Title

LieTransformer: Equivariant self-attention for Lie Groups

Paper Authors

Michael Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim

Paper Abstract

Group equivariant neural networks are used as building blocks of group invariant neural networks, which have been shown to improve generalisation performance and data efficiency through principled parameter sharing. Such works have mostly focused on group equivariant convolutions, building on the result that group equivariant linear maps are necessarily convolutions. In this work, we extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models. We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. We demonstrate the generality of our approach by showing experimental results that are competitive with baseline methods on a wide range of tasks: shape counting on point clouds, molecular property regression and modelling particle trajectories under Hamiltonian dynamics.
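The sketch below is a minimal, hypothetical illustration (plain NumPy, not the paper's code) of the equivariance property the abstract refers to, specialised to the translation group acting on point clouds: if the attention weights depend only on relative positions, translating every input point leaves the aggregated output features unchanged. The function name and the distance-based kernel are illustrative assumptions, not the LieSelfAttention layer itself.

```python
# Toy translation-equivariant self-attention on a point cloud.
# Illustrative only; assumes a simple distance kernel in place of learned maps.
import numpy as np

def relative_position_attention(coords, feats):
    """Self-attention whose weights depend only on pairwise offsets x_i - x_j."""
    # Pairwise relative positions, shape (N, N, 3); unchanged by a global shift.
    rel = coords[:, None, :] - coords[None, :, :]
    # Attention logits from relative positions (stand-in for a learned kernel).
    logits = -np.sum(rel ** 2, axis=-1)
    # Row-wise softmax over the other points.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Aggregate features with the attention weights.
    return weights @ feats

rng = np.random.default_rng(0)
coords = rng.normal(size=(5, 3))      # point-cloud coordinates
feats = rng.normal(size=(5, 4))       # per-point features
shift = np.array([10.0, -3.0, 2.5])   # a group element of the translation group

out = relative_position_attention(coords, feats)
out_shifted = relative_position_attention(coords + shift, feats)
# Because only relative positions enter the weights, the outputs agree
# up to numerical error under the translation.
print(np.allclose(out, out_shifted))  # True
```

The paper's LieSelfAttention layers generalise this kind of symmetry handling from translations to arbitrary Lie groups and their discrete subgroups, as stated in the abstract above.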
