Paper Title
OmniTact: A Multi-Directional High Resolution Touch Sensor
Paper Authors
Paper Abstract
Incorporating touch as a sensing modality for robots can enable finer and more robust manipulation skills. Existing tactile sensors are either flat, have small sensitive fields, or only provide low-resolution signals. In this paper, we introduce OmniTact, a multi-directional high-resolution tactile sensor. OmniTact is designed to be used as a fingertip for manipulation with robotic hands, and uses multiple micro-cameras to detect multi-directional deformations of a gel-based skin. This provides a rich signal from which a variety of different contact state variables can be inferred using modern image processing and computer vision methods. We evaluate the capabilities of OmniTact on a challenging robotic control task that requires inserting an electrical connector into an outlet, as well as a state estimation problem that is representative of those typically encountered in dexterous robotic manipulation, where the goal is to infer the angle of contact of a curved finger pressing against an object. Both tasks are performed using only touch sensing and deep convolutional neural networks to process images from the sensor's cameras. We compare with a state-of-the-art tactile sensor that is only sensitive on one side, as well as a state-of-the-art multi-directional tactile sensor, and find that OmniTact's combination of high-resolution and multi-directional sensing is crucial for reliably inserting the electrical connector, and allows for higher accuracy in the state estimation task. Videos and supplementary material can be found at https://sites.google.com/berkeley.edu/omnitact
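To make the described pipeline concrete, below is a minimal sketch of the kind of convolutional regressor the abstract alludes to: a CNN that maps images from the sensor's multiple micro-cameras to a contact state variable such as the contact angle. This is not the paper's architecture; the camera count, image resolution, and layer sizes are illustrative assumptions.

```python
# Hypothetical sketch: CNN regressing a contact angle from multi-camera
# tactile images. NUM_CAMERAS, IMG_SIZE, and the architecture are
# assumptions for illustration, not taken from the OmniTact paper.
import torch
import torch.nn as nn

NUM_CAMERAS = 5   # assumed number of micro-cameras in the fingertip
IMG_SIZE = 64     # assumed per-camera image resolution (pixels)

class ContactAngleRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Camera images are stacked along the channel axis, giving an
        # input of shape (B, 3 * NUM_CAMERAS, IMG_SIZE, IMG_SIZE).
        self.features = nn.Sequential(
            nn.Conv2d(3 * NUM_CAMERAS, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            # Three stride-2 convolutions reduce IMG_SIZE by a factor of 8.
            nn.Linear(64 * (IMG_SIZE // 8) ** 2, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # scalar output: predicted contact angle
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(images))

# Usage: a batch of 8 multi-camera observations -> 8 angle predictions.
model = ContactAngleRegressor()
obs = torch.randn(8, 3 * NUM_CAMERAS, IMG_SIZE, IMG_SIZE)
pred_angle = model(obs)  # shape (8, 1)
```

Stacking the camera views along the channel axis is one simple way to fuse the multi-directional signal; per-camera encoders with late fusion would be an equally plausible design under the same assumptions.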