Paper Title

Can we cover navigational perception needs of the visually impaired by panoptic segmentation?

Authors

Wei Mao, Jiaming Zhang, Kailun Yang, Rainer Stiefelhagen

Abstract

Navigational perception for visually impaired people has been substantially advanced by both classic and deep learning based segmentation methods. In classic visual recognition methods, segmentation models are mostly object-dependent, meaning a specific algorithm has to be devised for each object of interest. In contrast, deep learning based models such as instance segmentation and semantic segmentation allow blind individuals to recognize parts of the entire scene individually, namely things or stuff. However, neither of them can provide a holistic understanding of the surroundings for the visually impaired. Panoptic segmentation is a newly proposed visual model that aims to unify semantic segmentation and instance segmentation. Motivated by this, we propose to utilize panoptic segmentation as an approach to navigating visually impaired people by offering awareness of both things and stuff in their proximity. We demonstrate that panoptic segmentation is able to equip the visually impaired with holistic real-world scene perception through a wearable assistive system.
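To illustrate the unification the abstract describes (this is only a toy sketch, not the paper's system), a panoptic output assigns every pixel a semantic category and, for countable "thing" classes, an additional instance id. A common encoding, used for example in the COCO panoptic format, packs both into a single id map via `category_id * label_divisor + instance_id`; the class ids and maps below are invented for illustration:

```python
import numpy as np

# Toy 4x4 scene. Semantic map: "stuff" classes road=0 and sidewalk=1,
# "thing" class person=2 (these class ids are made up for this example).
semantic = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 1],
    [0, 0, 2, 1],
])
# Instance map: 0 = no instance (stuff pixels); two persons with ids 1 and 2.
instance = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 2, 0],
])

THING_CLASSES = {2}  # classes that have countable instances

def panoptic_encode(semantic, instance, label_divisor=1000):
    """Pack per-pixel (category, instance) pairs into one panoptic id map
    using the category_id * label_divisor + instance_id scheme."""
    pan = semantic * label_divisor
    thing_mask = np.isin(semantic, list(THING_CLASSES))
    pan[thing_mask] += instance[thing_mask]
    return pan

pan = panoptic_encode(semantic, instance)
# Each unique id is one segment: one per stuff region, one per thing instance.
print(sorted(np.unique(pan).tolist()))  # [0, 1000, 2001, 2002]
```

For a navigation assistant, each resulting segment can be reported separately: walkable stuff regions (road, sidewalk) as area cues, and each person instance as a distinct obstacle.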
