Paper Title
Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification
Paper Authors
Paper Abstract
Compared to traditional learning from scratch, knowledge distillation sometimes enables a DNN to achieve superior performance. This paper provides a new perspective on explaining the success of knowledge distillation, i.e., quantifying the knowledge points encoded in intermediate layers of a DNN for classification, based on information theory. To this end, we consider the signal processing in a DNN as layer-wise information discarding. A knowledge point is defined as an input unit whose information is discarded much less than that of other input units. Based on this quantification of knowledge points, we propose three hypotheses for knowledge distillation. 1. A DNN learning from knowledge distillation encodes more knowledge points than a DNN learning from scratch. 2. Knowledge distillation makes the DNN more likely to learn different knowledge points simultaneously, whereas a DNN learning from scratch tends to encode various knowledge points sequentially. 3. A DNN learning from knowledge distillation is usually optimized more stably than a DNN learning from scratch. To verify the above hypotheses, we design three types of metrics using annotations of foreground objects to analyze the feature representations of a DNN, i.e., the quantity and quality of knowledge points, the learning speed of different knowledge points, and the stability of optimization directions. In experiments, we diagnosed various DNNs on different classification tasks, i.e., image classification, 3D point cloud classification, binary sentiment classification, and question answering, which verified the above hypotheses.
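To make the notion of a knowledge point more concrete, below is a minimal Python/NumPy sketch of how knowledge points might be counted from a precomputed per-unit information-discarding (entropy) map together with a foreground annotation. The function name, the margin parameter `threshold_gap`, and the quantity/quality readout are illustrative assumptions for exposition, not the paper's exact implementation.

```python
import numpy as np

def count_knowledge_points(entropy_map, foreground_mask, threshold_gap=0.2):
    """Illustrative sketch: identify knowledge points from an information-discarding map.

    entropy_map:     assumed precomputed entropy H[i] per input unit (e.g., pixel);
                     a smaller value means the DNN discards less information of that unit.
    foreground_mask: boolean mask of annotated foreground units.
    threshold_gap:   hypothetical margin; units whose entropy falls below the average
                     background entropy minus this margin are treated as knowledge points.
    """
    background_entropy = entropy_map[~foreground_mask].mean()
    is_knowledge_point = entropy_map < (background_entropy - threshold_gap)

    # Quantity: knowledge points lying on the foreground (task-relevant).
    # Quality: fraction of all knowledge points that are task-relevant.
    n_foreground = int(np.sum(is_knowledge_point & foreground_mask))
    n_total = int(np.sum(is_knowledge_point))
    quality = n_foreground / max(n_total, 1)
    return n_foreground, quality
```

Under these assumptions, a distilled DNN would be expected to yield both a larger `n_foreground` and a higher `quality` than the same architecture trained from scratch.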