Paper Title
KD-Lib: A PyTorch library for Knowledge Distillation, Pruning and Quantization
Paper Authors
Paper Abstract
In recent years, the growing size of neural networks has led to a vast body of research on compression techniques that mitigate the drawbacks of such large models. Most of this work falls into three broad families: Knowledge Distillation, Pruning, and Quantization. While research in this domain has been steady, adoption and commercial usage of the proposed techniques have not progressed at the same rate. We present KD-Lib, an open-source PyTorch-based library containing state-of-the-art, modular implementations of algorithms from all three families on top of multiple abstraction layers. KD-Lib is model- and algorithm-agnostic, with extended support for hyperparameter tuning using Optuna and for logging and monitoring using TensorBoard. The library can be found at https://github.com/SforAiDl/KD_Lib.
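The vanilla knowledge-distillation objective underlying the first of these families (Hinton et al., 2015) blends a hard-label cross-entropy term with a temperature-scaled KL-divergence term against the teacher's soft labels. Below is a minimal plain-PyTorch sketch of that loss for orientation; it is illustrative only and does not reproduce KD-Lib's own API (the function name `distillation_loss` and its parameters are assumptions, not identifiers from the library).

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Vanilla knowledge-distillation loss (Hinton et al., 2015):
    a blend of hard-label cross-entropy and a temperature-softened
    KL divergence against the teacher's output distribution.
    NOTE: illustrative sketch, not KD-Lib's actual API."""
    # Hard-label term: ordinary cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL between temperature-scaled distributions,
    # rescaled by T^2 so gradient magnitudes stay comparable across
    # different temperature settings.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * hard + (1.0 - alpha) * soft
```

Per the abstract, KD-Lib wraps objectives like this (along with pruning and quantization routines) behind modular, model-agnostic abstractions rather than exposing them as bare loss functions.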