Paper Title


Integration of Leaky-Integrate-and-Fire-Neurons in Deep Learning Architectures

Authors

Richard C. Gerum, Achim Schilling

Abstract


Up to now, modern Machine Learning has mainly been based on fitting high-dimensional functions to enormous data sets, taking advantage of huge hardware resources. We show that biologically inspired neuron models such as the Leaky-Integrate-and-Fire (LIF) neuron provide novel and efficient ways of information encoding. They can be integrated into Machine Learning models and are a potential target for improving Machine Learning performance. Thus, we derived simple update rules for the LIF units from the differential equations, which are easy to integrate numerically. We apply a novel approach to train the LIF units in a supervised fashion via backpropagation, by assigning a constant value to the derivative of the neuron activation function exclusively for the backpropagation step. This simple mathematical trick helps to distribute the error among the neurons of the pre-connected layer. We apply our method to the IRIS blossoms image data set and show that the training technique can be used to train LIF neurons on image classification tasks. Furthermore, we show how to integrate our method into the KERAS (TensorFlow) framework and run it efficiently on GPUs. To generate a deeper understanding of the mechanisms during training, we developed interactive illustrations, which we provide online. With this study we want to contribute to the current efforts to enhance Machine Intelligence by integrating principles from biology.
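The two ideas at the core of the abstract — a forward-Euler update rule derived from the LIF differential equation, and a constant surrogate derivative for the spike function during backpropagation — can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' Keras implementation; all parameter names and values (`tau`, `v_thresh`, the drive current, unit membrane resistance) are assumptions chosen for demonstration.

```python
import numpy as np

def lif_step(v, i_in, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """One forward-Euler update of a leaky integrate-and-fire neuron.

    Discretises dV/dt = (-(V - V_rest) + I) / tau with step dt.
    Assumption: unit membrane resistance, so the input current I
    enters the drive term directly.
    """
    v = v + (dt / tau) * (-(v - v_rest) + i_in)
    spike = (v >= v_thresh).astype(float)  # Heaviside step: 1 if threshold crossed
    v = np.where(spike > 0, v_reset, v)    # reset membrane potential after a spike
    return v, spike

def surrogate_spike_grad(v, const=1.0):
    """The trick described in the abstract: during the backward pass, the
    derivative of the (almost-everywhere-zero-gradient) spike function is
    replaced by a constant, so the error can be distributed to the
    pre-connected layer. Here it simply returns that constant."""
    return np.full_like(v, const)

# Drive a single neuron with a constant input current and count its spikes.
v, spikes = np.array(0.0), 0
for _ in range(100):
    v, s = lif_step(v, np.array(2.0))
    spikes += int(s)
```

With these defaults the membrane potential relaxes toward the drive value 2.0, crosses the threshold every few steps, and is reset, producing a regular spike train; in a Keras/TensorFlow port the forward pass would use the Heaviside step while the backward pass substitutes `surrogate_spike_grad`.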
