Paper Title
Generating Efficient DNN-Ensembles with Evolutionary Computation
Paper Authors
Paper Abstract
In this work, we leverage ensemble learning as a tool for the creation of faster, smaller, and more accurate deep learning models. We demonstrate that we can jointly optimize for accuracy, inference time, and the number of parameters by combining DNN classifiers. To achieve this, we combine multiple ensemble strategies: bagging, boosting, and an ordered chain of classifiers. To reduce the number of DNN ensemble evaluations during the search, we propose EARN, an evolutionary approach that optimizes ensembles for these three objectives subject to user-specified constraints. We run EARN on 10 image classification datasets with an initial pool of 32 state-of-the-art DCNNs on both CPU and GPU platforms, and we generate models with speedups of up to $7.60\times$, parameter reductions of $10\times$, or accuracy gains of up to $6.01\%$ relative to the best DNN in the pool. In addition, our method generates models that are $5.6\times$ faster than state-of-the-art methods for automatic model generation.
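The abstract describes an evolutionary search over three objectives (accuracy, inference time, number of parameters). The sketch below illustrates the core building block such a multi-objective search relies on: Pareto-dominance filtering of candidate ensembles. The function names and the toy candidate tuples are illustrative assumptions, not EARN's actual implementation, which the abstract does not detail.

```python
# Hypothetical sketch of multi-objective selection over DNN ensembles.
# Each candidate is a tuple: (accuracy, inference time in ms, params in M).
# Accuracy is maximized; time and parameter count are minimized.

def dominates(a, b):
    """True if candidate a is no worse than b on all three objectives
    and strictly better on at least one."""
    acc_a, t_a, p_a = a
    acc_b, t_b, p_b = b
    no_worse = acc_a >= acc_b and t_a <= t_b and p_a <= p_b
    strictly_better = acc_a > acc_b or t_a < t_b or p_a < p_b
    return no_worse and strictly_better

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Toy population of candidate ensembles (illustrative values only).
population = [
    (0.92, 120.0, 25.0),   # accurate but slow and large
    (0.88,  40.0,  5.0),   # fast and small
    (0.85, 130.0, 30.0),   # dominated: worse on all three objectives
    (0.90,  60.0, 12.0),   # balanced trade-off
]

front = pareto_front(population)
```

An evolutionary method in this setting would repeatedly mutate and recombine ensembles, then use a filter like `pareto_front` (plus any user-specified constraints) to decide which candidates survive to the next generation, avoiding exhaustive evaluation of every possible DNN combination.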