Paper Title
EndoSLAM Dataset and An Unsupervised Monocular Visual Odometry and Depth Estimation Approach for Endoscopic Videos: Endo-SfMLearner
Paper Authors
Paper Abstract
Deep learning techniques hold promise for developing dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, and synthetically generated data. A Panda robotic arm, two commercially available capsule endoscopes, two conventional endoscopes with different camera properties, and two high-precision 3D scanners were employed to collect data from 8 ex-vivo porcine gastrointestinal (GI) tract organs. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex-vivo part: 18 sub-datasets for the colon, 12 for the stomach, and 5 for the small intestine, four of which contain polyp-mimicking elevations created by an expert gastroenterologist. Synthetic capsule endoscopy frames of the GI tract with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module to direct the network's focus toward distinguishable, highly textured tissue regions. The proposed approach uses a brightness-aware photometric loss to improve robustness under fast frame-to-frame illumination changes. To exemplify a use-case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state of the art. The code and a link to the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible at https://www.youtube.com/watch?v=G_LCe0aWWdQ.
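To make the two method-level ideas in the abstract concrete, below is a minimal PyTorch sketch of (a) a spatial attention gate of the kind that can be inserted into a residual encoder, and (b) a brightness-aware photometric loss that aligns global brightness statistics before comparing the target frame against the warped source frame. All names and the exact formulation here (the single-conv attention gate, the affine mean/std alignment, the alpha weighting between SSIM and L1) are illustrative assumptions, not the authors' released implementation; the official code lives at https://github.com/CapsuleEndoscope/EndoSLAM.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    # Minimal spatial attention gate (assumed form): one convolution produces
    # a 1-channel saliency map that rescales the features, encouraging the
    # network to weight distinguishable, highly textured tissue regions.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        attn = torch.sigmoid(self.conv(x))  # per-pixel weights in (0, 1)
        return x * attn

def ssim_distance(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified single-scale SSIM distance over 3x3 neighborhoods.
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return ((1 - num / den) / 2).clamp(0, 1)

def brightness_aware_photometric_loss(target, warped, alpha=0.85, eps=1e-6):
    # Align the warped frame's per-image mean/std to the target before the
    # photometric comparison, so fast frame-to-frame illumination changes are
    # not penalized as if they were geometric error (assumed alignment).
    t_mean = target.mean(dim=(2, 3), keepdim=True)
    t_std = target.std(dim=(2, 3), keepdim=True)
    w_mean = warped.mean(dim=(2, 3), keepdim=True)
    w_std = warped.std(dim=(2, 3), keepdim=True)
    warped_aligned = (warped - w_mean) / (w_std + eps) * t_std + t_mean
    l1 = (target - warped_aligned).abs()
    return (alpha * ssim_distance(target, warped_aligned) + (1 - alpha) * l1).mean()

In an SfMLearner-style training step, the source frame would first be warped into the target view using the predicted depth and pose, and the loss above would then score the reconstruction; the 0.85/0.15 SSIM-to-L1 split mirrors common defaults in this family of methods rather than a value taken from the paper.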