Paper Title

TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and a Grasping Baseline

Authors

Hongjie Fang, Hao-Shu Fang, Sheng Xu, Cewu Lu

Abstract

Transparent objects are common in our daily life and are frequently handled on automated production lines. Robust vision-based robotic grasping and manipulation of these objects would benefit automation. However, most current grasping algorithms fail in this setting because they rely heavily on depth images, while ordinary depth sensors usually cannot produce accurate depth information for transparent objects owing to the reflection and refraction of light. In this work, we address this issue by contributing a large-scale real-world dataset for transparent object depth completion, which contains 57,715 RGB-D images from 130 different scenes. Our dataset is the first large-scale, real-world dataset that provides ground-truth depth, surface normals, and transparent masks in diverse and cluttered scenes. Cross-domain experiments show that our dataset is more general and enables better generalization ability for models. Moreover, we propose an end-to-end depth completion network that takes an RGB image and an inaccurate depth map as inputs and outputs a refined depth map. Experiments demonstrate the superior efficacy, efficiency, and robustness of our method over previous works, and it can process high-resolution images under limited hardware resources. Real-robot experiments show that our method can also be applied robustly to grasping novel transparent objects. The full dataset and our method are publicly available at www.graspnet.net/transcg.
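The abstract specifies only the depth completion network's interface (an RGB image and an inaccurate depth map in, a refined depth map out), not its architecture. Below is a minimal PyTorch sketch of that interface; `DepthCompletionNet` and its layers are hypothetical stand-ins for illustration, not the authors' actual model.

```python
# Hypothetical sketch of the interface described in the abstract:
# RGB image + raw (inaccurate) depth map -> refined depth map.
# The layers below are placeholders, not the authors' architecture.
import torch
import torch.nn as nn

class DepthCompletionNet(nn.Module):
    def __init__(self, hidden_channels: int = 64):
        super().__init__()
        # Early fusion: concatenate the 3 RGB channels with the 1 depth channel.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, hidden_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_channels, hidden_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Regress a single-channel refined depth map.
        self.head = nn.Conv2d(hidden_channels, 1, kernel_size=3, padding=1)

    def forward(self, rgb: torch.Tensor, raw_depth: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W); raw_depth: (B, 1, H, W), with missing or
        # noisy values on transparent surfaces.
        x = torch.cat([rgb, raw_depth], dim=1)
        return self.head(self.encoder(x))

# Usage on a single 720p frame:
if __name__ == "__main__":
    net = DepthCompletionNet()
    rgb = torch.rand(1, 3, 720, 1280)
    raw_depth = torch.rand(1, 1, 720, 1280)
    refined = net(rgb, raw_depth)  # shape (1, 1, 720, 1280)
```

A fully convolutional design like this keeps memory proportional to image size, which is consistent with the abstract's claim that the method handles high-resolution images under limited hardware resources.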
