Paper Title

KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo

Paper Authors

Yikang Ding, Qingtian Zhu, Xiangyue Liu, Wentao Yuan, Haotian Zhang, Chi Zhang

Paper Abstract

Supervised multi-view stereo (MVS) methods have achieved remarkable progress in terms of reconstruction quality, but suffer from the challenge of collecting large-scale ground-truth depth. In this paper, we propose a novel self-supervised training pipeline for MVS based on knowledge distillation, termed KD-MVS, which mainly consists of self-supervised teacher training and distillation-based student training. Specifically, the teacher model is trained in a self-supervised fashion using both photometric and featuremetric consistency. Then we distill the knowledge of the teacher model to the student model through probabilistic knowledge transfer. With the supervision of validated knowledge, the student model is able to outperform its teacher by a large margin. Extensive experiments performed on multiple datasets show that our method can even outperform supervised methods.
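
To make the distillation stage concrete, below is a minimal PyTorch sketch of what the "probabilistic knowledge transfer" step could look like: the teacher's validated pseudo depth is encoded as a soft distribution over the student's depth hypotheses and the student's probability volume is supervised with a confidence-masked cross-entropy. All function and tensor names, the Gaussian form of the target distribution, and the example shapes are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def distillation_loss(student_prob, teacher_depth, teacher_conf,
                      depth_hypotheses, sigma=1.0):
    """Confidence-masked cross-entropy between a soft teacher target and
    the student's probability volume (illustrative sketch, not the paper's code).

    student_prob:     (B, D, H, W) softmax probability volume of the student
    teacher_depth:    (B, H, W)    validated pseudo depth from the teacher
    teacher_conf:     (B, H, W)    mask (0/1 or float) of validated pixels
    depth_hypotheses: (B, D, H, W) per-pixel depth hypotheses of the student
    sigma:            width of the assumed Gaussian target distribution
    """
    # Encode teacher depth as a Gaussian over the depth hypotheses
    # (assumed form of the soft target distribution).
    target = torch.exp(
        -(depth_hypotheses - teacher_depth.unsqueeze(1)) ** 2 / (2 * sigma ** 2)
    )
    target = target / (target.sum(dim=1, keepdim=True) + 1e-8)

    # Per-pixel cross-entropy between target and student distributions.
    ce = -(target * torch.log(student_prob.clamp(min=1e-8))).sum(dim=1)  # (B, H, W)

    # Average only over pixels the teacher validated.
    return (ce * teacher_conf).sum() / (teacher_conf.sum() + 1e-8)


if __name__ == "__main__":
    # Toy shapes: batch of 2, 48 depth hypotheses, 64x80 resolution.
    B, D, H, W = 2, 48, 64, 80
    prob = torch.softmax(torch.randn(B, D, H, W), dim=1)
    hyps = torch.linspace(425.0, 935.0, D).view(1, D, 1, 1).expand(B, D, H, W)
    depth = hyps[:, 0] + torch.rand(B, H, W) * (935.0 - 425.0)
    conf = (torch.rand(B, H, W) > 0.3).float()
    print(distillation_loss(prob, depth, conf, hyps, sigma=5.0).item())
```

The confidence mask is the key detail echoing the abstract: only pixels whose pseudo depth survived the teacher's validation contribute to the student's loss, which is what lets the student learn from "validated knowledge" rather than from the teacher's raw, noisy predictions.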
