Paper Title

Folding over Neural Networks

Authors

Minh Nguyen, Nicolas Wu

Abstract

Neural networks are typically represented as data structures that are traversed either through iteration or by manual chaining of method calls. However, a deeper analysis reveals that structured recursion can be used instead, so that traversal is directed by the structure of the network itself. This paper shows how such an approach can be realised in Haskell, by encoding neural networks as recursive data types, and then their training as recursion scheme patterns. In turn, we promote a coherent implementation of neural networks that delineates between their structure and semantics, allowing for compositionality in both how they are built and how they are trained.
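To make the abstract's idea concrete, here is a minimal sketch of what it describes: a feed-forward network encoded as a recursive data type, with forward propagation expressed as a fold over that structure rather than as manual chaining. This is an illustrative assumption, not the paper's actual encoding; the names `Network`, `Layer`, `foldNet`, and the ReLU activation are all hypothetical choices made for this sketch.

```haskell
-- A network as a recursive data type: a chain of layers ending in Output.
-- (Illustrative sketch only; the paper's own encoding may differ.)
data Layer = Layer { weights :: [[Double]], biases :: [Double] }

data Network = Output | Cons Layer Network

-- Structured recursion: a fold whose shape mirrors the network itself,
-- replacing iteration or manual chaining of method calls.
foldNet :: (Layer -> b -> b) -> b -> Network -> b
foldNet _ z Output       = z
foldNet f z (Cons l net) = f l (foldNet f z net)

-- One layer's semantics: affine map followed by a ReLU activation
-- (the activation choice is an assumption of this sketch).
applyLayer :: Layer -> [Double] -> [Double]
applyLayer (Layer w b) xs =
  zipWith (\row bi -> max 0 (sum (zipWith (*) row xs) + bi)) w b

-- Forward propagation as a fold: traversal is directed by the
-- structure of the network, composing layers front to back.
forward :: Network -> [Double] -> [Double]
forward = foldNet (\l rest -> rest . applyLayer l) id
```

Because `forward` is just one choice of algebra handed to `foldNet`, other interpretations of the same structure (e.g. counting layers, or a training pass) can be written as different folds over the identical data type, which is the structure/semantics separation the abstract refers to.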
