Paper Title
在模拟中学习密集的视觉对应关系以平滑和折叠真实的织物
Learning Dense Visual Correspondences in Simulation to Smooth and Fold Real Fabrics
Paper Authors
Paper Abstract
Robotic fabric manipulation is challenging due to the infinite dimensional configuration space, self-occlusion, and complex dynamics of fabrics. There has been significant prior work on learning policies for specific deformable manipulation tasks, but comparatively less focus on algorithms which can efficiently learn many different tasks. In this paper, we learn visual correspondences for deformable fabrics across different configurations in simulation and show that this representation can be used to design policies for a variety of tasks. Given a single demonstration of a new task from an initial fabric configuration, the learned correspondences can be used to compute geometrically equivalent actions in a new fabric configuration. This makes it possible to robustly imitate a broad set of multi-step fabric smoothing and folding tasks on multiple physical robotic systems. The resulting policies achieve 80.3% average task success rate across 10 fabric manipulation tasks on two different robotic systems, the da Vinci surgical robot and the ABB YuMi. Results also suggest robustness to fabrics of various colors, sizes, and shapes. See https://tinyurl.com/fabric-descriptors for supplementary material and videos.
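The abstract's core mechanism, using learned dense correspondences to map a demonstrated action onto a new fabric configuration, can be sketched as a nearest-neighbor lookup in descriptor space. The code below is a minimal illustration under assumed conventions, not the paper's implementation: `transfer_action`, the `(H, W, D)` descriptor-map layout, and the synthetic test data are all assumptions.

```python
import numpy as np

def transfer_action(desc_demo, desc_new, pick_uv):
    """Map a pick pixel from the demonstration image to the geometrically
    equivalent pixel in a new fabric image via nearest descriptor match.

    desc_demo, desc_new: (H, W, D) dense per-pixel descriptor maps
    pick_uv: (row, col) of the demonstrated pick point
    """
    target = desc_demo[pick_uv]  # (D,) descriptor at the demo pick point
    # L2 distance from the target descriptor to every pixel in the new image
    dists = np.linalg.norm(desc_new - target, axis=-1)
    # The pixel with the most similar descriptor is the corresponding point
    return np.unravel_index(np.argmin(dists), dists.shape)

# Synthetic check: build a "new configuration" as a known shift of the demo
# descriptor map, so the true correspondence of (3, 4) is (5, 5).
rng = np.random.default_rng(0)
desc_demo = rng.normal(size=(8, 8, 3))
desc_new = np.roll(desc_demo, shift=(2, 1), axis=(0, 1))
matched = transfer_action(desc_demo, desc_new, (3, 4))
```

In the multi-step tasks described above, this lookup would be repeated for each pick (and place) point of the single demonstration to produce the action sequence for the new configuration.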