Paper Title

Secrets of Event-Based Optical Flow

Authors

Shiba, Shintaro, Aoki, Yoshimitsu, Gallego, Guillermo

Abstract

Event cameras respond to scene dynamics and offer advantages to estimate motion. Following recent image-based deep-learning achievements, optical flow estimation methods for event cameras have rushed to combine those image-based methods with event data. However, it requires several adaptations (data conversion, loss function, etc.) as they have very different properties. We develop a principled method to extend the Contrast Maximization framework to estimate optical flow from events alone. We investigate key elements: how to design the objective function to prevent overfitting, how to warp events to deal better with occlusions, and how to improve convergence with multi-scale raw events. With these key elements, our method ranks first among unsupervised methods on the MVSEC benchmark, and is competitive on the DSEC benchmark. Moreover, our method allows us to expose the issues of the ground truth flow in those benchmarks, and produces remarkable results when it is transferred to unsupervised learning settings. Our code is available at https://github.com/tub-rip/event_based_optical_flow
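The Contrast Maximization idea mentioned in the abstract can be illustrated with a toy sketch: warp each event along a candidate flow to a common reference time, accumulate the warped events into an image, and score the flow by the image's variance (sharper edge alignment yields higher contrast). The constant-flow model, sensor resolution, and function name below are illustrative assumptions, not the paper's exact multi-scale formulation or loss.

```python
import numpy as np

def contrast_maximization_objective(events, flow, sensor_size=(180, 240)):
    """Toy variance-of-warped-events objective for Contrast Maximization.

    events: (N, 4) array of (x, y, t, polarity)
    flow:   (fx, fy) candidate optical flow in pixels per unit time
            (a single global flow; a simplifying assumption for illustration)
    """
    x, y, t, _ = events.T
    t_ref = t.min()
    # Warp each event back to the reference time along the candidate flow.
    xw = x - flow[0] * (t - t_ref)
    yw = y - flow[1] * (t - t_ref)
    H, W = sensor_size
    xi = np.clip(np.round(xw).astype(int), 0, W - 1)
    yi = np.clip(np.round(yw).astype(int), 0, H - 1)
    # Accumulate warped events into the image of warped events (IWE).
    iwe = np.zeros((H, W))
    np.add.at(iwe, (yi, xi), 1.0)
    # Higher variance means sharper edges, i.e., a better-aligned flow.
    return iwe.var()
```

With the correct flow, events fired by the same moving edge collapse onto the same pixels, so the objective is maximized; an optimizer searches flow candidates for that maximum.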
