Paper Title


NeuForm: Adaptive Overfitting for Neural Shape Editing

Paper Authors

Lin, Connor Z., Mitra, Niloy J., Wetzstein, Gordon, Guibas, Leonidas, Guerrero, Paul

Paper Abstract


Neural representations are popular for representing shapes, as they can be learned from sensor data and used for data cleanup, model completion, shape editing, and shape synthesis. Current neural representations can be categorized as either overfitting to a single object instance, or representing a collection of objects. However, neither allows accurate editing of neural scene representations: on the one hand, methods that overfit objects achieve highly accurate reconstructions, but do not generalize to unseen object configurations and thus cannot support editing; on the other hand, methods that represent a family of objects with variations do generalize but produce only approximate reconstructions. We propose NEUFORM to combine the advantages of both overfitted and generalizable representations by adaptively using the one most appropriate for each shape region: the overfitted representation where reliable data is available, and the generalizable representation everywhere else. We achieve this with a carefully designed architecture and an approach that blends the network weights of the two representations, avoiding seams and other artifacts. We demonstrate edits that successfully reconfigure parts of human-designed shapes, such as chairs, tables, and lamps, while preserving semantic integrity and the accuracy of an overfitted shape representation. We compare with two state-of-the-art competitors and demonstrate clear improvements in terms of plausibility and fidelity of the resultant edits.
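The core idea of blending the network weights of two implicit representations can be illustrated with a minimal sketch. This is NOT the paper's actual formulation; it assumes two small MLPs with identical architectures (one overfitted, one generalizable) and a hypothetical spatial blend factor `alpha_fn(x)` that linearly interpolates their parameters before the blended network is evaluated at a query point:

```python
import numpy as np

def mlp_forward(params, x):
    """Evaluate a small ReLU MLP on a single 3D point x."""
    h = x
    for i, (W, b) in enumerate(params):
        h = W @ h + b
        if i < len(params) - 1:  # ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return h  # scalar implicit-field value (e.g. occupancy or SDF)

def blend_params(params_overfit, params_general, alpha):
    """Linearly interpolate matching layers: alpha=1 -> purely overfitted net."""
    return [
        (alpha * Wo + (1 - alpha) * Wg, alpha * bo + (1 - alpha) * bg)
        for (Wo, bo), (Wg, bg) in zip(params_overfit, params_general)
    ]

def blended_field(x, params_overfit, params_general, alpha_fn):
    """Evaluate the field at x with a spatially varying blend weight."""
    alpha = alpha_fn(x)
    return mlp_forward(blend_params(params_overfit, params_general, alpha), x)

# Two toy networks with matching shapes (random weights, illustration only).
rng = np.random.default_rng(0)
sizes = [(16, 3), (16, 16), (1, 16)]
make = lambda: [(rng.standard_normal(s) * 0.1, np.zeros(s[0])) for s in sizes]
p_over, p_gen = make(), make()

# Hypothetical blend policy: trust the overfitted net near the origin,
# fall back to the generalizable net farther away.
alpha_fn = lambda x: float(np.exp(-np.linalg.norm(x)))
x = np.array([0.2, -0.1, 0.4])
v = blended_field(x, p_over, p_gen, alpha_fn)
```

Because the interpolation acts on network weights rather than on output values, the blended region is itself a valid network of the same architecture, which is what allows transitions without visible seams in the reconstructed surface.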
