Paper Title

Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints

Authors

Xinghao Zhu, Siddarth Jain, Masayoshi Tomizuka, Jeroen van Baar

Abstract

Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above to provide high-resolution image observations of contacts. Obtaining accurate volumetric meshes for the deformed elastomer can provide direct contact information and benefit robotic grasping and manipulation. This paper focuses on learning to synthesize the volumetric mesh of the elastomer based on the image imprints acquired from vision-based tactile sensors. Synthetic image-mesh pairs and real-world images are gathered from 3D finite element methods (FEM) and physical sensors, respectively. A graph neural network (GNN) is introduced to learn the image-to-mesh mappings with supervised learning. A self-supervised adaptation method and image augmentation techniques are proposed to transfer networks from simulation to reality, from primitive contacts to unseen contacts, and from one sensor to another. Using these learned and adapted networks, our proposed method can accurately reconstruct the deformation of the real-world tactile sensor elastomer in various domains, as indicated by the quantitative and qualitative results.
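
To make the image-to-mesh idea concrete, below is a minimal PyTorch sketch of one plausible architecture: a CNN encodes the tactile imprint into a global feature, which conditions learned per-node embeddings on a fixed FEM mesh graph; a few message-passing layers then decode per-node 3D displacements. This is an illustrative assumption, not the paper's implementation; the class name `ImageToMeshGNN`, the layer sizes, and the mean-aggregation scheme are all hypothetical.

```python
# Minimal sketch (not the authors' code) of a GNN that maps a tactile
# image imprint to per-node displacements of the elastomer mesh.
import torch
import torch.nn as nn


class ImageToMeshGNN(nn.Module):
    def __init__(self, num_nodes, feat_dim=64):
        super().__init__()
        # CNN encoder: tactile image -> global feature vector (assumed design).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Learned per-node embeddings for the fixed FEM mesh topology.
        self.node_embed = nn.Parameter(torch.randn(num_nodes, feat_dim))
        # Two message-passing layers over the mesh graph.
        self.msg = nn.ModuleList(
            [nn.Linear(2 * feat_dim, feat_dim) for _ in range(2)]
        )
        # Decode each node's feature into a 3D displacement.
        self.head = nn.Linear(feat_dim, 3)

    def forward(self, image, edge_index):
        # image: (B, 3, H, W); edge_index: (2, E) mesh edges as (src, dst).
        g = self.encoder(image)                             # (B, feat_dim)
        h = self.node_embed.unsqueeze(0) + g.unsqueeze(1)   # (B, N, feat_dim)
        src, dst = edge_index
        for layer in self.msg:
            # Mean-aggregate messages from neighboring mesh nodes.
            agg = torch.zeros_like(h)
            agg.index_add_(1, dst, h[:, src])
            deg = torch.zeros(h.shape[1], device=h.device)
            deg.index_add_(0, dst, torch.ones_like(dst, dtype=h.dtype))
            agg = agg / deg.clamp(min=1).view(1, -1, 1)
            h = torch.relu(layer(torch.cat([h, agg], dim=-1)))
        return self.head(h)                                 # (B, N, 3)


# Hypothetical usage: a 1024-node mesh with random connectivity.
model = ImageToMeshGNN(num_nodes=1024)
img = torch.rand(1, 3, 64, 64)
edges = torch.randint(0, 1024, (2, 4096))
disp = model(img, edges)  # per-node displacements, shape (1, 1024, 3)
```

Under the paper's described pipeline, such a network would be trained with supervised regression against FEM-simulated node displacements, then adapted to real sensor images via the proposed self-supervised method and image augmentation.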
