Paper Title
Reservoir Memory Machines as Neural Computers
Paper Authors
Paper Abstract
Differentiable neural computers extend artificial neural networks with an explicit, interference-free memory, enabling the model to perform classic computation tasks such as graph traversal. However, such models are difficult to train, requiring long training times and large datasets. In this work, we achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently, namely an echo state network with an explicit, interference-free memory. This extension enables echo state networks to recognize all regular languages, including those that contractive echo state networks provably cannot recognize. Further, we demonstrate experimentally that our model performs comparably to its fully-trained deep version on several typical benchmark tasks for differentiable neural computers.
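To make the baseline concrete, the sketch below shows a standard echo state network: a fixed random recurrent reservoir (scaled to be contractive, which is exactly the property that limits which regular languages it can recognize) with only a linear readout trained by ridge regression. This is an illustrative sketch of a plain ESN, not the authors' reservoir memory machine; all sizes, hyperparameters, and the toy delay task are assumptions for demonstration.

```python
import numpy as np

# Illustrative echo state network (ESN) sketch. NOTE: this is a plain ESN
# without the explicit memory described in the paper; all hyperparameters
# below are assumed for demonstration.

rng = np.random.default_rng(0)

n_in, n_res = 1, 100                       # input / reservoir sizes (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Rescale the recurrent weights so the spectral radius is below 1; this
# contractive regime is what gives the reservoir the echo state property.
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the fixed reservoir with an input sequence; collect all states."""
    h = np.zeros(n_res)
    states = []
    for u in u_seq:
        h = np.tanh(W_in @ np.atleast_1d(u) + W @ h)
        states.append(h)
    return np.array(states)

# Only the linear readout is trained (ridge regression), as is standard for
# ESNs. Toy task: reproduce the input delayed by one time step.
u = rng.uniform(-1.0, 1.0, 500)
y = np.roll(u, 1)
y[0] = 0.0

H = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ y)

pred = H @ W_out
mse = np.mean((pred[10:] - y[10:]) ** 2)   # skip the initial transient
print(f"delay-1 MSE: {mse:.4f}")
```

The cheap training step (a single linear solve for `W_out`) is what the abstract refers to as "trained very efficiently"; the paper's contribution is augmenting this architecture with an explicit memory so that it can also solve tasks a purely contractive reservoir cannot.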