Paper Title

SGDN: Segmentation-Based Grasp Detection Network For Unsymmetrical Three-Finger Gripper

Authors

Wang, Dexin

Abstract

In this paper, we present the Segmentation-Based Grasp Detection Network (SGDN) to predict feasible robotic grasps for an unsymmetrical three-finger robotic gripper from RGB images. The feasible grasps of a target should be a collection of grasp regions sharing the same grasp angle and width. In other words, a simplified planar grasp representation should be pixel-level rather than region-level, such as the five-dimensional grasp representation. We therefore propose a pixel-level grasp representation, the oriented base-fixed triangle. It is also better suited to an unsymmetrical three-finger gripper, which cannot grasp some objects symmetrically; its grasp angle lies in [0, 2π) instead of the [0, π) range of a parallel-plate gripper. To predict the appropriate grasp region and its corresponding grasp angle and width in the RGB image, SGDN uses DeepLabv3+ as a feature extractor and a three-channel grasp predictor to estimate the feasible oriented base-fixed triangle grasp representation at each pixel. On the re-annotated Cornell Grasp Dataset, our model achieves accuracies of 96.8% and 92.27% on the image-wise and object-wise splits respectively, and produces accurate predictions consistent with state-of-the-art methods.
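The abstract describes SGDN's structure at a high level: DeepLabv3+ extracts features, and a three-channel predictor outputs a feasible grasp for every pixel. As a rough illustration of that pixel-level, three-channel idea only, the PyTorch sketch below rests on assumptions: the class name, feature shape, and the sigmoid-based angle and width encodings are hypothetical, since the paper's actual layer configuration and losses are not given here.

```python
# Minimal sketch (assumptions throughout) of a pixel-level, three-channel
# grasp predictor as the abstract describes: one channel each for grasp
# region, grasp angle, and grasp width, computed for every pixel of a
# DeepLabv3+-style feature map. Names and encodings are hypothetical.
import math

import torch
import torch.nn as nn


class GraspPredictor(nn.Module):
    """Per-pixel grasp maps: region score, angle in [0, 2*pi), width."""

    def __init__(self, in_channels: int = 256):
        super().__init__()
        # A single 1x1 convolution keeps the prediction pixel-level:
        # each output pixel gets its own (region, angle, width) triple.
        self.head = nn.Conv2d(in_channels, 3, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        out = self.head(feats)
        region = torch.sigmoid(out[:, 0:1])          # graspability in (0, 1)
        # Full-circle angle range [0, 2*pi), as needed for the unsymmetrical
        # gripper (a parallel-plate gripper would only need [0, pi)).
        angle = torch.sigmoid(out[:, 1:2]) * 2 * math.pi
        width = torch.relu(out[:, 2:3])              # non-negative gripper width
        return region, angle, width


if __name__ == "__main__":
    feats = torch.randn(1, 256, 80, 80)  # stand-in for DeepLabv3+ features
    region, angle, width = GraspPredictor()(feats)
    print(region.shape, angle.shape, width.shape)  # each: [1, 1, 80, 80]
```

In a real system the region map would be thresholded or argmax-pooled to pick grasp pixels, with the angle and width maps read out at those locations; how SGDN actually decodes and supervises these maps is not specified in the abstract.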
