Paper Title
Spike-FlowNet: Event-based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks
Paper Authors
Paper Abstract
Event-based cameras display great potential for a variety of tasks such as high-speed motion detection and navigation in low-light environments where conventional frame-based cameras suffer critically. This is attributed to their high temporal resolution, high dynamic range, and low power consumption. However, conventional computer vision methods as well as deep Analog Neural Networks (ANNs) are not suited to work well with the asynchronous and discrete nature of event camera outputs. Spiking Neural Networks (SNNs) serve as ideal paradigms to handle event camera outputs, but deep SNNs suffer in terms of performance due to the spike vanishing phenomenon. To overcome these issues, we present Spike-FlowNet, a deep hybrid neural network architecture integrating SNNs and ANNs for efficiently estimating optical flow from sparse event camera outputs without sacrificing performance. The network is trained end-to-end with self-supervised learning on the Multi-Vehicle Stereo Event Camera (MVSEC) dataset. Spike-FlowNet outperforms its corresponding ANN-based method in terms of optical flow prediction capability while providing significant computational efficiency.
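
To make the hybrid SNN-ANN idea concrete, the following is a minimal PyTorch sketch, an illustration rather than the authors' released Spike-FlowNet code: a spiking encoder built from integrate-and-fire (IF) neurons processes a window of sparse event frames, and a conventional analog decoder regresses a dense two-channel flow map from the accumulated spike activity. The layer widths, the IF threshold, the two-channel ON/OFF event representation, and all class names are assumptions made purely for illustration.

# Illustrative sketch (not the authors' code): a tiny hybrid network in which a
# spiking encoder with integrate-and-fire (IF) neurons consumes sparse event
# frames over time, and an analog (ANN) decoder regresses dense optical flow.
import torch
import torch.nn as nn


class IFNeuron(nn.Module):
    """Integrate-and-fire layer: accumulates input into a membrane potential,
    emits a binary spike when the potential crosses the threshold, then resets."""

    def __init__(self, threshold: float = 1.0):
        super().__init__()
        self.threshold = threshold
        self.mem = None  # membrane potential state

    def reset(self):
        self.mem = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.mem is None:
            self.mem = torch.zeros_like(x)
        self.mem = self.mem + x
        spikes = (self.mem >= self.threshold).float()
        self.mem = self.mem - spikes * self.threshold  # soft reset after spiking
        return spikes


class HybridFlowNet(nn.Module):
    """Spiking encoder (event-driven, sparse activations) followed by an analog
    decoder that maps accumulated spike activity to a 2-channel (u, v) flow map."""

    def __init__(self):
        super().__init__()
        self.enc1 = nn.Conv2d(2, 16, 3, stride=2, padding=1)   # 2 channels: ON/OFF events
        self.if1 = IFNeuron()
        self.enc2 = nn.Conv2d(16, 32, 3, stride=2, padding=1)
        self.if2 = IFNeuron()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),  # (u, v) optical flow
        )

    def forward(self, event_seq: torch.Tensor) -> torch.Tensor:
        # event_seq: (T, B, 2, H, W) -- T sparse event frames per flow prediction
        self.if1.reset()
        self.if2.reset()
        acc = 0.0
        for t in range(event_seq.shape[0]):
            s1 = self.if1(self.enc1(event_seq[t]))
            s2 = self.if2(self.enc2(s1))
            acc = acc + s2                       # accumulate encoder spikes over the window
        return self.decoder(acc)                 # ANN decoder runs once per window


if __name__ == "__main__":
    net = HybridFlowNet()
    events = (torch.rand(5, 1, 2, 64, 64) > 0.95).float()  # synthetic sparse event input
    flow = net(events)
    print(flow.shape)  # torch.Size([1, 2, 64, 64])

In this sketch the sparse, binary spike activity keeps the encoder computation cheap, while the analog decoder runs only once per accumulation window, which is the intuition behind the computational-efficiency claim in the abstract; the actual Spike-FlowNet architecture and training loss are described in the paper itself.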