Paper Title


SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks

Authors

Fuchs, Fabian B., Worrall, Daniel E., Fischer, Volker, Welling, Max

Abstract


We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds and graphs, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the data input. A positive corollary of equivariance is increased weight-tying within the model. The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds and graphs with varying number of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy N-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.
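The equivariance property the abstract describes can be checked numerically on a toy layer. The sketch below is an illustration of SE(3)-equivariance, not the paper's actual attention module: each point's output vector is a distance-weighted sum of relative positions, so the layer ignores translations and its outputs rotate with the input, which is exactly the contract `f(xRᵀ + t) = f(x)Rᵀ` that the SE(3)-Transformer guarantees for its features.

```python
import numpy as np

def vector_features(points):
    # Toy SE(3)-equivariant layer (an illustration, not the paper's model):
    # each point's output is a distance-weighted sum of relative positions.
    # Relative positions remove translations; the weights depend only on
    # distances, so output vectors rotate together with the input.
    diffs = points[:, None, :] - points[None, :, :]    # (N, N, 3) x_i - x_j
    dists = np.linalg.norm(diffs, axis=-1)             # (N, N), rotation-invariant
    weights = np.exp(-dists)                           # invariant attention-like weights
    return (weights[..., None] * diffs).sum(axis=1)    # (N, 3) vector outputs

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))

# Random orthogonal matrix R (via QR) and translation t.
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
t = rng.normal(size=3)

# Equivariance check: transforming the input then applying the layer
# matches applying the layer then rotating the output.
out_then_rotate = vector_features(x) @ R.T
transform_then_out = vector_features(x @ R.T + t)
print(np.allclose(out_then_rotate, transform_then_out))  # True
```

The same commutation test is how equivariance claims are typically verified empirically; for the actual model, the outputs additionally carry higher-order (type-l) features rather than only 3-vectors.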
