Paper Title
Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration
Paper Authors
Paper Abstract
Recently, much attention has been paid to neural architecture search (NAS), aiming to outperform manually designed neural architectures on high-level vision recognition tasks. Inspired by this success, here we attempt to leverage NAS techniques to automatically design efficient network architectures for low-level image restoration tasks. In particular, we propose a memory-efficient hierarchical NAS (termed HiNAS) and apply it to two such tasks: image denoising and image super-resolution. HiNAS adopts gradient-based search strategies and builds a flexible hierarchical search space consisting of an inner search space and an outer search space, which are in charge of designing cell architectures and deciding cell widths, respectively. For the inner search space, we propose a layer-wise architecture sharing strategy (LWAS), resulting in more flexible architectures and better performance. For the outer search space, we design a cell-sharing strategy to save memory and considerably accelerate the search speed. The proposed HiNAS method is both memory and computation efficient. With a single GTX1080Ti GPU, it takes only about 1 hour to search for a denoising network on the BSD-500 dataset and 3.5 hours to search for a super-resolution structure on the DIV2K dataset. Experiments show that the architectures found by HiNAS have fewer parameters and enjoy a faster inference speed, while achieving highly competitive performance compared with state-of-the-art methods. Code is available at: https://github.com/hkzhang91/HiNAS
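The gradient-based search strategy mentioned above typically works by relaxing the discrete choice among candidate operations into a softmax-weighted mixture, so that architecture parameters can be optimized by gradient descent alongside network weights. The following is a minimal, dependency-free sketch of that continuous relaxation for a single edge of a cell; the toy operation set and function names are illustrative assumptions, not the actual HiNAS operation set or implementation.

```python
import math

# Hypothetical toy operation set for one edge of a cell.
# (The real search space uses convolution/attention-style ops.)
OPS = {
    "identity": lambda x: x,
    "double":   lambda x: 2.0 * x,
    "negate":   lambda x: -x,
}

def softmax(alphas):
    """Numerically stable softmax over architecture parameters."""
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, alphas):
    """Continuous relaxation: the edge output is a softmax-weighted
    sum of all candidate operations, so 'alphas' is differentiable."""
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, OPS.values()))

def derive_op(alphas):
    """After search converges, discretize: keep the candidate
    operation with the largest architecture weight."""
    names = list(OPS.keys())
    return names[max(range(len(alphas)), key=lambda i: alphas[i])]
```

With equal architecture weights, `mixed_op(1.0, [0.0, 0.0, 0.0])` averages the three candidate outputs; as training pushes one weight up, `derive_op` selects that operation for the final discrete architecture.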