Paper Title

Nerfels: Renderable Neural Codes for Improved Camera Pose Estimation

Authors

Gil Avraham, Julian Straub, Tianwei Shen, Tsun-Yi Yang, Hugo Germain, Chris Sweeney, Vasileios Balntas, David Novotny, Daniel DeTone, Richard Newcombe

Abstract

This paper presents a framework that combines traditional keypoint-based camera pose optimization with an invertible neural rendering mechanism. Our proposed 3D scene representation, Nerfels, is locally dense yet globally sparse. As opposed to existing invertible neural rendering systems, which overfit a model to the entire scene, we adopt a feature-driven approach for representing scene-agnostic, local 3D patches with renderable codes. By modelling a scene only where local features are detected, our framework effectively generalizes to unseen local regions in the scene via an optimizable code conditioning mechanism in the neural renderer, all while maintaining the low memory footprint of a sparse 3D map representation. Our model can be incorporated into existing state-of-the-art hand-crafted and learned local feature pose estimators, yielding improved performance when evaluated on ScanNet in wide camera baseline scenarios.
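The abstract builds on keypoint-based camera pose optimization, i.e. minimizing the 2D reprojection error of matched keypoints over the camera pose. As a rough illustration of that underlying objective (not the paper's actual Nerfels pipeline), here is a minimal numpy sketch of the reprojection residual that such an optimizer drives toward zero; all function and variable names here are hypothetical:

```python
import numpy as np

def project(K, R, t, X):
    """Project Nx3 world points into pixels with intrinsics K and pose (R, t)."""
    Xc = X @ R.T + t              # world frame -> camera frame
    x = Xc @ K.T                  # camera frame -> homogeneous pixel coords
    return x[:, :2] / x[:, 2:3]   # perspective division

def reprojection_residual(K, R, t, X, x_obs):
    """Per-keypoint 2D residual a pose optimizer (e.g. PnP refinement) minimizes."""
    return project(K, R, t, X) - x_obs

# Toy example: pinhole intrinsics, identity pose, points in front of the camera.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.0, 0.0, 2.0],
              [0.5, -0.3, 3.0]])
x_obs = project(K, R, t, X)       # "observed" keypoints from the true pose
res = reprojection_residual(K, R, t, X, x_obs)
print(np.allclose(res, 0.0))      # residual vanishes at the true pose
```

In the paper's setting, the renderable codes add a photometric/feature-alignment term on top of this geometric residual, so the pose estimate benefits from both cues.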
