Paper Title


Domain Adaptation Through Task Distillation

Paper Authors

Brady Zhou, Nimit Kalra, Philipp Krähenbühl

Paper Abstract


Deep networks devour millions of precisely annotated images to build their complex and powerful representations. Unfortunately, tasks like autonomous driving have virtually no real-world training data. Repeatedly crashing a car into a tree is simply too expensive. The commonly prescribed solution is simple: learn a representation in simulation and transfer it to the real world. However, this transfer is challenging since simulated and real-world visual experiences vary dramatically. Our core observation is that for certain tasks, such as image recognition, datasets are plentiful. They exist in any interesting domain, simulated or real, and are easy to label and extend. We use these recognition datasets to link up a source and target domain to transfer models between them in a task distillation framework. Our method can successfully transfer navigation policies between drastically different simulators: ViZDoom, SuperTuxKart, and CARLA. Furthermore, it shows promising results on standard domain adaptation benchmarks.
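The abstract only states the idea at a high level. Below is a minimal conceptual sketch, not the authors' method or released code, of what distilling a policy across domains through a shared recognition representation could look like: a teacher policy that acts on recognition features (trained in the source domain) provides pseudo-labels for a student policy that acts on raw target-domain images. All module names, network shapes, and the specific recognition and policy architectures are hypothetical placeholders.

```python
# Conceptual sketch only: cross-domain policy distillation via a shared
# recognition representation. Every network and shape here is a placeholder.
import torch
import torch.nn as nn

# Recognition models map raw images in each domain into a shared label space
# (e.g., semantic classes that are easy to annotate in both simulators).
target_recognizer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

# Teacher policy acts on shared recognition features (assumed already trained
# in the source domain); the student acts directly on raw target-domain images.
teacher_policy = nn.Sequential(nn.Flatten(), nn.Linear(16 * 64 * 64, 3))
student_policy = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 3))

optimizer = torch.optim.Adam(student_policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def distill_step(target_images: torch.Tensor) -> float:
    """One distillation step on a batch of raw target-domain frames."""
    with torch.no_grad():
        shared_features = target_recognizer(target_images)  # map into shared space
        teacher_actions = teacher_policy(shared_features)    # pseudo-label actions
    student_actions = student_policy(target_images)
    loss = loss_fn(student_actions, teacher_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with a dummy batch of 64x64 RGB target-domain frames.
dummy_batch = torch.randn(8, 3, 64, 64)
print(distill_step(dummy_batch))
```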
