Paper title
Natural-gradient learning for spiking neurons
Paper authors
Paper abstract
In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses that are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean gradient descent can easily lead to inconsistencies due to such parametrization dependence. These issues are resolved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with explanations of several well-documented biological phenomena, such as dendritic democracy, multiplicative scaling, and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural gradient descent.
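To illustrate the parametrization-dependence problem described in the abstract, the following minimal Python sketch compares Euclidean and natural gradient updates for a toy linear neuron. All quantities here (the inputs x, y, the dendritic attenuation factor c, the learning rate eta, and the single-synapse loss) are hypothetical illustrations, not taken from the paper; the sketch only demonstrates the general principle that preconditioning with the pulled-back metric makes the functional update invariant under reparametrization.

```python
import numpy as np

# Toy linear neuron with squared-error loss L(w) = 0.5 * (y - w * x)**2.
# A distal synapse is modelled by an assumed reparametrization w = theta / c,
# where c > 1 plays the role of dendritic attenuation: the same functional
# weight w requires a larger local parameter theta (a larger "spine").

x, y = 1.0, 2.0
c = 4.0      # assumed attenuation factor (hypothetical)
eta = 0.1    # learning rate (hypothetical)

def loss_grad_w(w):
    """Gradient dL/dw of the toy loss."""
    return -(y - w * x) * x

w0 = 0.5
theta0 = c * w0  # same functional weight, expressed in theta-coordinates

# --- Euclidean gradient descent ---
# Update computed directly in w-coordinates:
dw_euclid = -eta * loss_grad_w(w0)
# Update computed in theta-coordinates: by the chain rule,
# dL/dtheta = (dL/dw) * (dw/dtheta) = (dL/dw) / c,
# so the induced change in w is smaller by a factor 1/c**2 -> inconsistent.
dtheta_euclid = -eta * loss_grad_w(theta0 / c) / c
dw_from_theta_euclid = dtheta_euclid / c

# --- Natural gradient descent ---
# Pull the metric back into theta-coordinates:
# g_theta = (dw/dtheta)**2 * g_w = 1 / c**2  (taking g_w = 1).
# Preconditioning by the inverse metric cancels the parametrization dependence.
g_theta = 1.0 / c**2
dtheta_natural = -eta * (1.0 / g_theta) * (loss_grad_w(theta0 / c) / c)
dw_from_theta_natural = dtheta_natural / c

print(f"Euclidean update in w:             {dw_euclid:.4f}")
print(f"Euclidean update via theta:        {dw_from_theta_euclid:.4f}  (inconsistent)")
print(f"Natural-gradient update via theta: {dw_from_theta_natural:.4f}  (matches)")
```

Running the sketch shows that the Euclidean update computed through the theta-parametrization shrinks the functional weight change by 1/c**2, whereas the natural-gradient update reproduces exactly the same change in w regardless of the coordinates used. In the paper's setting, the metric is induced by the neuron's output distribution (a Fisher information metric) rather than this one-dimensional toy pullback.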