Paper Title
Data-free Knowledge Distillation for Segmentation using Data-Enriching GAN

Paper Authors

Bhogale, Kaushal

Paper Abstract

Distilling knowledge from huge pre-trained networks to improve the performance of tiny networks has enabled deep learning models to be used in many real-time and mobile applications. Several approaches that demonstrate success in this field have made use of the true training dataset to extract relevant knowledge. In the absence of the true dataset, however, extracting knowledge from deep networks remains a challenge. Recent works on data-free knowledge distillation demonstrate such techniques on classification tasks. To this end, we explore data-free knowledge distillation for segmentation tasks. First, we identify several challenges specific to segmentation. We make use of the DeGAN training framework to propose a novel loss function for enforcing diversity in a setting where a few classes are underrepresented. Further, we explore a new training framework for performing knowledge distillation in a data-free setting. We obtain an improvement of 6.93% in Mean IoU over previous approaches.
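The abstract does not give the exact loss formulation, so the following is a minimal PyTorch-style sketch of the two ingredients it names: a diversity term that discourages underrepresented classes in the generated batch, and a pixel-wise distillation loss between teacher and student segmentation maps. The specific choices here are assumptions, not the paper's method: diversity is modeled as the entropy of the batch-averaged per-pixel class distribution from the teacher, and the function names (`diversity_loss`, `distillation_loss`) and temperature `T` are hypothetical.

```python
import torch
import torch.nn.functional as F

def diversity_loss(teacher_logits: torch.Tensor) -> torch.Tensor:
    """Encourage generated batches to cover all segmentation classes.

    teacher_logits: (B, C, H, W) per-pixel class scores from the teacher.
    Maximizing the entropy of the mean class distribution penalizes batches
    in which a few classes are underrepresented (assumed formulation).
    """
    probs = F.softmax(teacher_logits, dim=1)        # (B, C, H, W)
    mean_class_dist = probs.mean(dim=(0, 2, 3))     # (C,) averaged over batch and pixels
    entropy = -(mean_class_dist * (mean_class_dist + 1e-8).log()).sum()
    return -entropy  # minimizing this maximizes class coverage

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    """Pixel-wise KL divergence between softened teacher and student maps."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```

In a data-free setting such as DeGAN's, the generator would be trained against the frozen teacher with a term like `diversity_loss`, and the student would then be distilled on the generated images with `distillation_loss`; the actual objectives and weighting used in the paper may differ.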
