Paper Title

Taskology: Utilizing Task Relations at Scale

Authors

Yao Lu, Sören Pirk, Jan Dlabal, Anthony Brohan, Ankita Pasad, Zhao Chen, Vincent Casser, Anelia Angelova, Ariel Gordon

Abstract

Many computer vision tasks address the problem of scene understanding and are naturally interrelated, e.g., object classification, detection, scene segmentation, depth estimation, etc. We show that we can leverage the inherent relationships among collections of tasks, as they are trained jointly, supervising each other through their known relationships via consistency losses. Furthermore, explicitly utilizing the relationships between tasks improves their performance while dramatically reducing the need for labeled data, and allows training with additional unsupervised or simulated data. We demonstrate a distributed joint training algorithm with task-level parallelism, which affords a high degree of asynchronicity and robustness. This allows learning across multiple tasks, or with large amounts of input data, at scale. We demonstrate our framework on subsets of the following collection of tasks: depth and normal prediction, semantic segmentation, 3D motion and ego-motion estimation, and object tracking and 3D detection in point clouds. We observe improved performance across these tasks, especially in the low-label regime.
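To make the idea of a consistency loss concrete, here is a minimal sketch (not the paper's implementation) for one task pair mentioned in the abstract: depth and normal prediction. Surface normals can be derived from a depth map by finite differences, so the two predictions can supervise each other by penalizing their disagreement, with no labels required. All function names and the orthographic-projection simplification are illustrative assumptions.

```python
import numpy as np

def normals_from_depth(depth):
    """Derive per-pixel surface normals from a depth map via finite
    differences (simplified orthographic approximation; illustrative only)."""
    dz_dy, dz_dx = np.gradient(depth)  # gradients along rows, then columns
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)  # unit normals

def consistency_loss(pred_depth, pred_normals):
    """Cross-task consistency: penalize disagreement between predicted
    normals and the normals implied by the predicted depth
    (1 minus mean cosine similarity; hypothetical loss form)."""
    derived = normals_from_depth(pred_depth)
    cos = np.sum(derived * pred_normals, axis=-1)
    return 1.0 - cos.mean()

# Toy check: a planar depth ramp and the normals derived from it
# are perfectly consistent, so the loss is (numerically) zero.
depth = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # slopes along x
normals = normals_from_depth(depth)
print(consistency_loss(depth, normals))  # ≈ 0 for a self-consistent pair
```

In joint training, a loss like this would be added to each task's supervised loss and back-propagated into both networks, which is how unlabeled data can still provide a training signal to both tasks.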
