Paper Title


Multi-Sample Online Learning for Probabilistic Spiking Neural Networks

Authors

Hyeryung Jang, Osvaldo Simeone

Abstract


Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains for inference and learning via the dynamic, online, event-driven processing of binary time series. Most existing learning algorithms for SNNs are based on deterministic neuronal models, such as leaky integrate-and-fire, and rely on heuristic approximations of backpropagation through time that enforce constraints such as locality. In contrast, probabilistic SNN models can be trained directly via principled online, local, update rules that have proven to be particularly effective for resource-constrained systems. This paper investigates another advantage of probabilistic SNNs, namely their capacity to generate independent outputs when queried over the same input. It is shown that the multiple generated output samples can be used during inference to robustify decisions and to quantify uncertainty -- a feature that deterministic SNN models cannot provide. Furthermore, they can be leveraged for training in order to obtain more accurate statistical estimates of the log-loss training criterion, as well as of its gradient. Specifically, this paper introduces an online learning rule based on generalized expectation-maximization (GEM) that follows a three-factor form with global learning signals and is referred to as GEM-SNN. Experimental results on structured output memorization and classification on a standard neuromorphic data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration when increasing the number of samples used for inference and training.
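The multi-sample inference idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sigmoid firing probability, the toy membrane potentials, and the function names are all assumptions. A probabilistic SNN's readout is queried several times on the same input; the per-step empirical spike rate quantifies uncertainty, and a majority vote over samples robustifies the final decision.

```python
import math
import random

random.seed(0)

def spike_probability(v):
    # Sigmoid firing probability of a probabilistic (e.g., GLM-style)
    # spiking neuron given its membrane potential v. (Assumed form.)
    return 1.0 / (1.0 + math.exp(-v))

def sample_output(membrane_potentials):
    # Draw one binary spike train from the model's Bernoulli
    # output distribution, one independent draw per time step.
    return [1 if random.random() < spike_probability(v) else 0
            for v in membrane_potentials]

def multi_sample_inference(membrane_potentials, num_samples=10):
    # Query the stochastic model num_samples times on the SAME input.
    # The empirical spike rate per time step quantifies uncertainty;
    # thresholding it at 0.5 is a majority-vote decision.
    samples = [sample_output(membrane_potentials) for _ in range(num_samples)]
    rates = [sum(step) / num_samples for step in zip(*samples)]
    decision = [1 if r >= 0.5 else 0 for r in rates]
    return decision, rates

# Hypothetical readout potentials over 5 time steps.
potentials = [2.0, -1.5, 0.2, 3.0, -2.5]
decision, rates = multi_sample_inference(potentials, num_samples=200)
```

A deterministic SNN would return the same spike train on every query, so the `rates` vector would be uninformative; here, rates near 0 or 1 indicate confident steps while rates near 0.5 flag uncertain ones, which is the calibration benefit the abstract attributes to probabilistic models.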
