Paper Title

Autonomous Intraluminal Navigation of a Soft Robot using Deep-Learning-based Visual Servoing

Paper Authors

Lazo, Jorge F., Lai, Chun-Feng, Moccia, Sara, Rosa, Benoit, Catellani, Michele, de Mathelin, Michel, Ferrigno, Giancarlo, Breedveld, Paul, Dankelman, Jenny, De Momi, Elena

Paper Abstract

Navigation inside luminal organs is an arduous task that requires non-intuitive coordination between the movement of the operator's hand and the information obtained from the endoscopic video. The development of tools to automate certain tasks could alleviate the physical and mental load of doctors during interventions, allowing them to focus on diagnosis and decision-making tasks. In this paper, we present a synergic solution for intraluminal navigation consisting of a 3D-printed endoscopic soft robot that can move safely inside luminal structures. Visual servoing, based on Convolutional Neural Networks (CNNs), is used to achieve the autonomous navigation task. The CNN is trained with phantom and in-vivo data to segment the lumen, and a model-less approach is presented to control the movement in constrained environments. The proposed robot is validated in anatomical phantoms in different path configurations. We analyze the movement of the robot using different metrics such as task completion time, smoothness, error in the steady state, and mean and maximum error. We show that our method is suitable for safe navigation in hollow environments and conditions different from the ones the network was originally trained on.
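The abstract does not detail the control law, but a common way to close the loop between a CNN lumen-segmentation mask and a bending actuator is a proportional, model-free mapping driven by the image-space offset of the lumen center. The sketch below is a minimal illustration under that assumption, not the authors' implementation; the functions `lumen_centroid`, `visual_servoing_error`, and `bending_command`, as well as the gain and normalization, are hypothetical.

```python
# Minimal sketch (illustrative assumption, not the paper's implementation):
# steer the endoscope tip so the segmented lumen stays centered in the image.
from typing import Optional
import numpy as np

def lumen_centroid(mask: np.ndarray) -> Optional[np.ndarray]:
    """Return the (x, y) centroid of the binary lumen mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return np.array([xs.mean(), ys.mean()])

def visual_servoing_error(mask: np.ndarray) -> Optional[np.ndarray]:
    """Image-space offset of the lumen centroid from the image center,
    normalised to [-1, 1] per axis."""
    centroid = lumen_centroid(mask)
    if centroid is None:
        return None
    h, w = mask.shape
    centre = np.array([w / 2.0, h / 2.0])
    return (centroid - centre) / centre

def bending_command(error: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Proportional, model-free mapping from image error to two bending
    increments; the gain value is a hypothetical tuning parameter."""
    return -gain * error

# Synthetic example: a lumen blob offset towards the top-right of a 128x128 mask.
mask = np.zeros((128, 128), dtype=np.uint8)
mask[20:50, 80:110] = 1
err = visual_servoing_error(mask)
if err is not None:
    print("image error:", err, "bending command:", bending_command(err))
```

In practice the mask would come from the trained segmentation CNN at each frame, and the resulting command would be sent to the robot's bending actuators until the lumen center converges to the image center.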
