Paper Title
PoET-BiN: Power Efficient Tiny Binary Neurons
Paper Authors
Paper Abstract
The success of neural networks in image classification has inspired various hardware implementations on embedded platforms such as Field Programmable Gate Arrays, embedded processors and Graphical Processing Units. These embedded platforms are constrained in terms of power, which is mainly consumed by the Multiply Accumulate operations and the memory accesses for weight fetching. Quantization and pruning have been proposed to address this issue. Though effective, these techniques do not take into account the underlying architecture of the embedded hardware. In this work, we propose PoET-BiN, a Look-Up Table based power efficient implementation on resource constrained embedded devices. A modified Decision Tree approach forms the backbone of the proposed implementation in the binary domain. A LUT access consumes far less power than the equivalent Multiply Accumulate operation it replaces, and the modified Decision Tree algorithm eliminates the need for memory accesses. We applied the PoET-BiN architecture to implement the classification layers of networks trained on the MNIST, SVHN and CIFAR-10 datasets, with near state-of-the-art results. The energy reduction for the classifier portion reaches up to six orders of magnitude compared to a floating point implementation and up to three orders of magnitude when compared to recent binary quantized neural networks.
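The core idea in the abstract — replacing a Multiply Accumulate operation with a single Look-Up Table access — can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows, for a hypothetical binary neuron with ±1 weights and a sign-like activation, how the full truth table can be precomputed once so that inference needs one LUT read instead of per-weight MACs and weight fetches:

```python
# Hedged sketch (not the PoET-BiN code): a binary neuron with +/-1 weights
# and a threshold activation is a Boolean function of its binary inputs,
# so it can be replaced exactly by a 2^n-entry lookup table.
from itertools import product

def binary_neuron_mac(x, w, threshold):
    """Reference MAC path: inputs and weights in {-1, +1}, threshold activation."""
    acc = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if acc >= threshold else 0

def build_lut(w, threshold):
    """Precompute the neuron's output for every binary input pattern."""
    n = len(w)
    lut = []
    for bits in product([0, 1], repeat=n):          # bit 0 maps to -1, bit 1 to +1
        x = [1 if b else -1 for b in bits]
        lut.append(binary_neuron_mac(x, w, threshold))
    return lut

def lut_neuron(bits, lut):
    """Inference: address the LUT with the input bits -- no MACs, no weight fetch."""
    idx = 0
    for b in bits:                                  # first bit is most significant
        idx = (idx << 1) | b
    return lut[idx]

# Hypothetical 4-input neuron, chosen only for illustration.
w, th = [1, -1, 1, 1], 0
lut = build_lut(w, th)

# The LUT reproduces the MAC result on every input pattern.
for bits in product([0, 1], repeat=4):
    x = [1 if b else -1 for b in bits]
    assert lut_neuron(list(bits), lut) == binary_neuron_mac(x, w, th)
```

The table grows exponentially in the fan-in, which is why a naive LUT only works for small neurons; the modified Decision Tree approach described in the abstract is what makes the idea scale on real FPGA LUTs.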