Paper Title
Modeling 3D Shapes by Reinforcement Learning
Authors
Abstract
We explore how to enable machines to model 3D shapes like human modelers using deep reinforcement learning (RL). In 3D modeling software such as Maya, a modeler usually creates a mesh model in two steps: (1) approximating the shape using a set of primitives; (2) editing the meshes of the primitives to create detailed geometry. Inspired by such artist-based modeling, we propose a two-step neural framework based on RL to learn 3D modeling policies. By taking actions and collecting rewards in an interactive environment, the agents first learn to parse a target shape into primitives and then to edit the geometry. To train the modeling agents effectively, we introduce a novel training algorithm that combines a heuristic policy, imitation learning, and reinforcement learning. Our experiments show that the agents can learn good policies to produce regular and structure-aware mesh models, which demonstrates the feasibility and effectiveness of the proposed RL framework.
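To make the abstract's interaction loop concrete, the sketch below shows a toy version of the first step: an agent places primitives in an environment, receives the improvement in shape fit as a reward, and can be driven by a heuristic policy of the kind the training algorithm bootstraps from. All class and function names here are hypothetical illustrations, not the paper's actual code; the "shape" is simplified to a 1-D array of heights.

```python
class PrimitiveEnv:
    """Toy environment: the agent approximates a 1-D 'shape' with primitives.

    Hypothetical stand-in for the paper's primitive-parsing environment;
    the real setup operates on 3D shapes and cuboid primitives.
    """

    def __init__(self, target, max_steps=10):
        self.target = target              # target shape as a list of heights
        self.canvas = [0] * len(target)   # current approximation
        self.steps = 0
        self.max_steps = max_steps

    def fit(self):
        # Negative L1 distance to the target: higher is better.
        return -sum(abs(t - c) for t, c in zip(self.target, self.canvas))

    def step(self, action):
        # action = (position, height): place/overwrite one primitive.
        pos, height = action
        before = self.fit()
        self.canvas[pos] = height
        self.steps += 1
        done = self.steps >= self.max_steps
        # Reward is the improvement in fit produced by this action.
        return self.canvas, self.fit() - before, done


def heuristic_policy(env):
    # Greedy heuristic: fix the cell with the largest remaining error.
    errors = [abs(t - c) for t, c in zip(env.target, env.canvas)]
    pos = errors.index(max(errors))
    return (pos, env.target[pos])


env = PrimitiveEnv(target=[3, 1, 4, 1, 5])
done = False
while not done:
    _, reward, done = env.step(heuristic_policy(env))
print(env.canvas)  # matches the target once the heuristic has fixed every cell
```

In the paper's framework, demonstrations from such a heuristic policy would seed imitation learning, after which RL fine-tunes the agent beyond the heuristic's performance; the second (geometry-editing) step follows the same interact-and-reward pattern on mesh vertices.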