Paper Title

DSNAS: Direct Neural Architecture Search without Parameter Retraining

Paper Authors

Shoukang Hu, Sirui Xie, Hehui Zheng, Chunxiao Liu, Jianping Shi, Xunying Liu, Dahua Lin

Paper Abstract

If NAS methods are solutions, what is the problem? Most existing NAS methods require two-stage parameter optimization. However, the performance of the same architecture in the two stages correlates poorly. In this work, we propose a new problem definition for NAS, task-specific end-to-end, based on this observation. We argue that given a computer vision task for which a NAS method is expected, this definition reduces the vaguely defined NAS evaluation to i) the accuracy on this task and ii) the total computation consumed to finally obtain a model with satisfactory accuracy. Seeing that most existing methods do not solve this problem directly, we propose DSNAS, an efficient differentiable NAS framework that simultaneously optimizes architecture and parameters with a low-biased Monte Carlo estimate. Child networks derived from DSNAS can be deployed directly without parameter retraining. Compared with two-stage methods, DSNAS successfully discovers networks with comparable accuracy (74.4%) on ImageNet in 420 GPU hours, reducing the total time by more than 34%. Our implementation is available at https://github.com/SNAS-Series/SNAS-Series.

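The sketch below is only a minimal illustration of the single-stage idea described in the abstract: architecture logits and operation weights are trained in the same loop on a sampled (discrete) child network, so no separate retraining stage is needed. It assumes a straight-through categorical sample as a stand-in for the paper's low-biased Monte Carlo estimator; all names (MixedOp, alpha, make_ops) are hypothetical and this is not the authors' implementation, which is available at the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One search edge: candidate ops with learnable architecture logits (alpha)."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        probs = F.softmax(self.alpha, dim=-1)
        idx = int(torch.multinomial(probs, 1))      # sample one discrete op
        # Straight-through factor: equals 1.0 in the forward pass, but lets the
        # loss gradient flow back into probs (and hence alpha) in the backward pass.
        st = (1.0 - probs[idx]).detach() + probs[idx]
        return st * self.ops[idx](x)

# Toy supernet with two searchable edges (3x3 conv vs. identity on each edge).
make_ops = lambda: [nn.Conv2d(8, 8, 3, padding=1), nn.Identity()]
net = nn.Sequential(MixedOp(make_ops()), MixedOp(make_ops()),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

# Single-stage training: one optimizer updates operation weights and alphas together,
# so the sampled child network is trained directly and needs no parameter retraining.
opt = torch.optim.SGD(net.parameters(), lr=0.1)
x, y = torch.randn(4, 8, 16, 16), torch.randint(0, 10, (4,))
loss = F.cross_entropy(net(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```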